OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months

Three years after Google sounded alarm bells over ChatGPT, the tables have turned.

The shoe is most certainly on the other foot. On Monday, OpenAI CEO Sam Altman declared a “code red” at the company to improve ChatGPT, delaying advertising plans and other products in the process, The Information reported, citing a leaked internal memo. The move follows Google’s release of its Gemini 3 model last month, which has outperformed ChatGPT on some industry benchmark tests and sparked high-profile praise on social media.

In the memo, Altman wrote, “We are at a critical time for ChatGPT.” The company will push back work on advertising integration, AI agents for health and shopping, and a personal assistant feature called Pulse. Altman encouraged temporary team transfers and established daily calls for employees responsible for enhancing the chatbot.

The directive creates an odd symmetry with events from December 2022, when Google management declared its own “code red” internal emergency after ChatGPT launched and rapidly gained in popularity. At the time, Google CEO Sundar Pichai reassigned teams across the company to develop AI prototypes and products to compete with OpenAI’s chatbot. Now, three years later, the AI industry is in a very different place.


Google announces second Android 16 release of 2025 is heading to Pixels

The update is rolling out to Pixels starting today.

Google is following through on its pledge to split Android versions into more frequent updates. We already had one Android 16 release this year, and now it’s time for the second. The new version is rolling out first on Google’s Pixel phones, featuring more icon customization, easier parental controls, and AI-powered notifications. Don’t be bummed if you aren’t first in line for the new Android 16—Google also has a raft of general improvements coming to the wider Android ecosystem.

Android 16, part 2

Since rolling out the first version of Android in 2008, Google has largely stuck to one major release per year. Android 16 changes that, moving from one monolithic release to two. Today’s OS update is the second part of the Android 16 era, but don’t expect major changes; the first release, back in June, carried the bulk of them. Most of what we’ll see in the second update is geared toward Google’s Pixel phones, plus some less notable changes for developers.

Google’s new AI features for notifications are probably the most important change. Android 16 will use AI for two notification tasks: summarizing and organizing. The OS will take long chat conversations and summarize the notifications with AI. Notification data is processed locally on the device and won’t be uploaded anywhere. In the notification shade, the collapsed notification line will feature a summary of the conversation rather than a snippet of one message. Expanding the notification will display the full text.


Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

New research offers clues about why some prompt injection attacks may succeed.

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”

This suggests models absorb both meaning and syntactic patterns but can over-rely on structural shortcuts when those shortcuts strongly correlate with specific domains in the training data, sometimes allowing structure to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.
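The probe described above can be illustrated with a toy sketch. All of the templates and word swaps below are invented for illustration (the paper's actual prompts and tooling are not public in this summary); the idea is simply to replace a question's content words with unrelated ones while preserving its grammatical frame:

```python
# Toy illustration of the structure-vs-meaning probe described above.
# The swap table is hypothetical; it mimics the article's one example,
# "Where is Paris located?" -> "Quickly sit Paris clouded?"

def scramble_content_words(question: str, swaps: dict[str, str]) -> str:
    """Replace content words while keeping the sentence's grammatical shape."""
    words = question.split()
    # Strip the trailing "?" before lookup, then re-attach it at the end.
    scrambled = [swaps.get(w.strip("?").lower(), w.strip("?")) for w in words]
    return " ".join(scrambled) + "?"

# Every slot except the entity of interest ("Paris") becomes nonsense,
# yet the [word] [word] [noun] [word]? frame survives intact.
swaps = {"where": "Quickly", "is": "sit", "located": "clouded"}
probe = scramble_content_words("Where is Paris located?", swaps)
print(probe)  # Quickly sit Paris clouded?
```

If a model still answers "France" to the scrambled probe, that is evidence it is keying on the structural pattern rather than the literal meaning of the words.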


OpenAI desperate to avoid explaining why it deleted pirated book datasets

OpenAI risks increased fines after deleting pirated books datasets.

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web, with the bulk of the data coming from a shadow library called Library Genesis (LibGen).


OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide

OpenAI’s response to teen suicide case is “disturbing,” lawyer says.

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.


HP plans to save millions by laying off thousands, ramping up AI use

Product development, internal operations among teams expected to be hit hardest.

HP Inc. said that it will lay off 4,000 to 6,000 employees in favor of AI deployments, claiming it will help save $1 billion in annualized gross run rate by the end of its fiscal 2028.

HP expects to complete the layoffs by the end of that fiscal year. The reductions will largely hit product development, internal operations, and customer support, HP CEO Enrique Lores said during an earnings call on Tuesday.

Using AI, HP will “accelerate product innovation, improve customer satisfaction, and boost productivity,” Lores said.


Vision Pro M5 review: It’s time for Apple to make some tough choices

A state of the union from someone who actually sort of uses the thing.

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.


Anthropic introduces cheaper, more powerful, more efficient Opus 4.5 model

Longer chats address a long-standing criticism of Claude.

Anthropic today released Opus 4.5, its flagship frontier model, and it brings improvements in coding performance, as well as some user experience improvements that make it more generally competitive with OpenAI’s latest frontier models.

Perhaps the most prominent change for most users is that in the consumer app experiences (web, mobile, and desktop), Claude will be less prone to abruptly hard-stopping conversations because they have run too long. The improvement to memory within a single conversation applies not just to Opus 4.5, but to any current Claude models in the apps.

Users who experienced abrupt endings (despite having room left in their session and weekly usage budgets) were hitting a hard 200,000-token context window. Whereas some large language model implementations simply trim the earliest messages from the context once a conversation runs past the window’s maximum, Claude ended the conversation outright rather than let the user sit through an increasingly incoherent exchange in which the model forgets things based on how old they are.
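The two policies contrasted above can be sketched in a few lines. This is a minimal, hypothetical illustration; the window size and token counting are stand-ins and do not reflect Claude's actual internals:

```python
# Minimal sketch of the two context-window policies contrasted above.
# MAX_TOKENS is a toy stand-in for a real limit like 200,000, and
# whitespace word count stands in for real tokenization.

MAX_TOKENS = 8

def count_tokens(history: list[str]) -> int:
    return sum(len(msg.split()) for msg in history)

def trim_policy(history: list[str]) -> list[str]:
    """Drop the oldest messages until the conversation fits the window.
    The model silently 'forgets' early turns, risking incoherence."""
    while count_tokens(history) > MAX_TOKENS:
        history = history[1:]
    return history

def hard_stop_policy(history: list[str]) -> list[str]:
    """Refuse to continue once the window is full, rather than forget."""
    if count_tokens(history) > MAX_TOKENS:
        raise RuntimeError("conversation ended: context window exceeded")
    return history

chat = ["hello there", "hi how are you", "tell me a long story"]
print(trim_policy(chat))  # ['tell me a long story']
```

With the same over-budget conversation, `hard_stop_policy` raises instead of trimming, which is the behavior users were hitting before this update.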


UK government will buy tech to boost AI sector in $130M growth push

Plan will offer guaranteed payments for British startups making AI hardware.

The UK government will promise to buy emerging chip technology from British companies in a £100 million ($130 million) bid to boost growth by supporting the artificial intelligence sector.

Liz Kendall, the science secretary, said the government would offer guaranteed payments to British startups producing AI hardware that can help sectors such as life sciences and financial services.

Under a “first customer” promise modeled on the way the government bought COVID vaccines, Kendall’s department will commit in advance to buying AI inference chips that meet set performance standards.


“Go generate a bridge and jump off it”: How video pros are navigating AI

I talked with nine creators about economic pressures and fan backlash.

In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”
