OpenAI signs massive AI compute deal with Amazon

Deal will provide access to hundreds of thousands of Nvidia chips that power ChatGPT.

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

YouTube denies AI was involved with odd removals of tech tutorials

YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but YouTube now denies that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with apparently no way to trigger a human review to overturn the removals. AI seemed to be running the show, with creators’ appeals denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But for creators, it remains unclear why the videos were taken down, since YouTube claims that neither the initial enforcement decisions nor the denied appeals were the result of an automation issue.

Neural network finds an enzyme that can break down polyurethane

Given a dozen hours, the enzyme can turn a foam pad into reusable chemicals.

You’ll often hear plastic pollution referred to as a problem. But the reality is that it’s multiple problems. Depending on the properties we need, we form plastics out of different polymers, each of which is held together by a distinct type of chemical bond. So the method we use to break down one type of polymer may be incompatible with the chemistry of another.

That problem is why, even though we’ve had success finding enzymes that break down common polyester plastics such as PET, those enzymes are only partial solutions to plastic waste. Researchers aren’t sitting back and basking in the triumph of partial solutions, though, and they now have very sophisticated protein design tools to help them out.

That’s the story behind a completely new enzyme that researchers developed to break down polyurethane, the polymer commonly used to make foam cushioning, among other things. The new enzyme is compatible with an industrial-style recycling process that breaks the polymer down into its basic building blocks, which can be used to form fresh polyurethane.

Cursor introduces its coding model alongside multi-agent interface

The vibe-coding IDE puts an emphasis on speed with Composer.

For the first time, Cursor has introduced what it claims is a competitive coding model. It arrives alongside version 2.0 of the company’s integrated development environment (IDE), which adds a feature for running tasks with multiple agents in parallel.

The company’s flagship product is an IDE modeled after Visual Studio Code in many respects, but with a strong emphasis on vibe coding and heavier direct integration of large language model-based tools in the interface and workflow. Since its introduction, Cursor has supported models developed by other companies such as OpenAI, Google, and Anthropic. However, while it has trialed its own built-in models, they weren’t competitive with the big frontier models.

It’s a different story now, according to the company’s claims about Composer. Built with reinforcement learning and a mixture-of-experts architecture, Composer is, in Cursor’s words, “a frontier model that is 4x faster than similarly intelligent models.” That’s a significant claim when you consider what it’s competing with.

“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like too tall an order. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

Caught cheating in class, college students “apologized” using AI—and profs called them out

Time for some “life lessons.”

With a child in college and a spouse who’s a professor, I have front-row access to the unfolding debacle that is “higher education in the age of AI.”

These days, students routinely submit even “personal reflection” papers that are AI generated. (And routinely appear surprised when caught.)

Read a paper longer than 10 pages? Not likely—even at elite schools. Toss that sucker into an AI tool and read a quick summary instead. It’s more efficient!


ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

It could be “one of the biggest IPOs of all time,” according to Reuters.

On Tuesday, OpenAI CEO Sam Altman told Reuters during a livestream that going public “is the most likely path for us, given the capital needs that we’ll have.” Now sources familiar with the matter say the ChatGPT maker is preparing for an initial public offering that could value the company at up to $1 trillion, with filings possible as early as the second half of 2026. However, news of the potential IPO comes as the company faces mounting losses that may have reached as much as $11.5 billion in the most recent quarter, according to one estimate.

Going public could give OpenAI more efficient access to capital and enable larger acquisitions using public stock, helping finance Altman’s plans to spend trillions of dollars on AI infrastructure, according to people familiar with the company’s thinking who spoke with Reuters. Chief Financial Officer Sarah Friar has reportedly told some associates the company targets a 2027 IPO listing, while some financial advisors predict 2026 could be possible.

Three people with knowledge of the plans told Reuters that OpenAI has discussed raising $60 billion at the low end in preliminary talks. That figure refers to how much money the company would raise by selling shares to investors, not the total worth of the company. If OpenAI sold that amount of stock while keeping most shares private, the entire company could be valued at $1 trillion or more. The final figures and timing will likely change based on business growth and market conditions.
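The raise-versus-valuation distinction comes down to simple arithmetic. As a rough sketch using the preliminary figures reported by Reuters (neither number is final):

```python
# Rough illustration of a share sale vs. total company valuation,
# using the preliminary figures reported by Reuters.
raise_amount = 60e9   # low-end amount raised by selling shares
valuation = 1e12      # potential value of the whole company

# The raise would hand investors only a small slice of the company.
fraction_sold = raise_amount / valuation
print(f"Implied fraction of the company sold: {fraction_sold:.0%}")  # 6%
```

In other words, even a historically large $60 billion offering could leave the vast majority of OpenAI’s shares in private hands.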

After teen death lawsuits, Character.AI will restrict chats for under-18 users

AI companion app faces legal and regulatory pressure over child safety concerns.

On Wednesday, Character.AI announced it will bar anyone under the age of 18 from open-ended chats with its AI characters starting on November 25, implementing one of the most restrictive age policies yet among AI chatbot platforms. The company faces multiple lawsuits from families who say its chatbots contributed to teenager deaths by suicide.

Over the next month, Character.AI says it will ramp down chatbot use among minors by identifying them and placing a two-hour daily limit on their chatbot access. The company plans to use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts. On November 25, those users will no longer be able to create or talk to chatbots, though they can still read previous conversations. The company said it is working to build alternative features for users under the age of 18, such as the ability to create videos, stories, and streams with AI characters.

Character.AI CEO Karandeep Anand told The New York Times that the company wants to set an example for the industry. “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” Anand said in the interview. The company also plans to establish an AI safety lab.

Meta denies torrenting porn to train AI, says downloads were for “personal use”

Meta says lawsuit claiming it pirated porn to train AI makes no sense.

This week, Meta asked a US district court to toss a lawsuit alleging that the tech giant illegally torrented pornography to train AI.

The move comes after Strike 3 Holdings discovered illegal downloads of some of its adult films on Meta corporate IP addresses, as well as other downloads that Meta allegedly concealed using a “stealth network” of 2,500 “hidden IP addresses.” Accusing Meta of stealing porn to secretly train an unannounced adult version of its AI model powering Movie Gen, Strike 3 sought damages that could have exceeded $350 million, TorrentFreak reported.

Filing a motion to dismiss the lawsuit on Monday, Meta accused Strike 3 of relying on “guesswork and innuendo,” while writing that Strike 3 “has been labeled by some as a ‘copyright troll’ that files extortive lawsuits.” Requesting that all copyright claims be dropped, Meta argued that there was no evidence that the tech giant directed any of the downloads of about 2,400 adult movies owned by Strike 3—or was even aware of the illegal activity.
