OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months

Three years after Google sounded alarm bells over ChatGPT, the tables have turned.

The shoe is most certainly on the other foot. On Monday, OpenAI CEO Sam Altman declared a “code red” at the company to improve ChatGPT, delaying advertising plans and other products in the process, The Information reported, citing a leaked internal memo. The move follows Google’s release of its Gemini 3 model last month, which has outperformed ChatGPT on some industry benchmarks and drawn high-profile praise on social media.

In the memo, Altman wrote, “We are at a critical time for ChatGPT.” The company will push back work on advertising integration, AI agents for health and shopping, and a personal assistant feature called Pulse. Altman encouraged temporary team transfers and established daily calls for employees responsible for enhancing the chatbot.

The directive creates an odd symmetry with events from December 2022, when Google management declared its own “code red” internal emergency after ChatGPT launched and rapidly gained in popularity. At the time, Google CEO Sundar Pichai reassigned teams across the company to develop AI prototypes and products to compete with OpenAI’s chatbot. Now, three years later, the AI industry is in a very different place.

Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

New research offers clues about why some prompt injection attacks may succeed.

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”

This suggests models absorb both meaning and syntactic patterns but can over-rely on structural shortcuts when those patterns strongly correlate with specific domains in the training data, sometimes allowing structure to override semantic understanding in edge cases. The team plans to present the findings at NeurIPS later this month.
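
The paper’s probes are more systematic than this, but a minimal sketch of the idea is easy to write. The example below assumes the official OpenAI Python client and uses a placeholder model name; it illustrates the kind of test involved, not the researchers’ actual harness.

```python
# Probe whether a model still answers a question whose syntactic
# template is preserved but whose content words are nonsense.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not one tested in the paper.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Where is Paris located?"))     # meaningful question
print(ask("Quickly sit Paris clouded?"))  # same structure, scrambled words
# If both replies name France, the model may be keying on the
# syntactic template rather than the meaning of the words.
```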

Google tells employees it must double capacity every 6 months to meet AI demand

Google’s AI infrastructure chief tells staff the company needs a thousandfold capacity increase in 5 years.

While AI bubble talk fills the air these days, with fears that overinvestment has inflated a bubble that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”
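
The two figures in the presentation line up, as a quick back-of-the-envelope check shows: doubling every six months means ten doublings over five years, which compounds to roughly a thousandfold.

```python
# Back-of-the-envelope check on the memo's numbers: one doubling every
# six months compounds to about the cited "1000x" over five years.
years = 5
doublings = years * 2                 # two doublings per year
growth = 2 ** doublings
print(f"{doublings} doublings -> {growth}x")  # 10 doublings -> 1024x
```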

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”

In 1982, a physics joke gone wrong sparked the invention of the emoticon

A simple proposal on a 1982 electronic bulletin board helped sarcasm flourish online.

On September 19, 1982, Carnegie Mellon University computer science research assistant professor Scott Fahlman posted a message to the university’s bulletin board software that would later come to shape how people communicate online. His proposal: use :-) and :-( as markers to distinguish jokes from serious comments. While Fahlman describes himself as “the inventor…or at least one of the inventors” of what would later be called the smiley face emoticon, the full story reveals something more interesting than a lone genius moment.

The whole episode started three days earlier, when computer scientist Neil Swartz posed a physics problem to colleagues on Carnegie Mellon’s “bboard,” an early online message board. The discussion thread had been exploring what happens to objects in a free-falling elevator, and Swartz presented a specific scenario involving a lit candle and a drop of mercury.

That evening, computer scientist Howard Gayle responded with a facetious message titled “WARNING!” He claimed that an elevator had been “contaminated with mercury” and suffered “some slight fire damage” due to a physics experiment. Despite clarifying posts noting the warning was a joke, some people took it seriously.

Celebrated game developer Rebecca Heineman dies at age 62

The gaming community mourns a beloved mentor and LGBTQ+ advocate with a storied career.

On Monday, veteran game developer Rebecca Ann Heineman died in Rockwall, Texas, at age 62 after a battle with adenocarcinoma. Apogee founder Scott Miller first shared the news publicly on social media, and her son William confirmed her death to Ars Technica. Heineman’s GoFundMe page, which displayed a final message she had posted about entering palliative care, will now help her family with funeral costs.

Rebecca “Burger Becky” Heineman was born in October 1963 and grew up in Whittier, California. She first gained national recognition in 1980 when she won the national Atari 2600 Space Invaders championship in New York at age 16, becoming the first formally recognized US video game champion. That victory launched a career spanning more than four decades and 67 credited games, according to MobyGames.

Among many achievements in her life, Heineman was perhaps best known for co-founding Interplay Productions with Brian Fargo, Jay Patel, and Troy Worrell in 1983. The company created franchises like Wasteland, Fallout, and Baldur’s Gate. At Interplay, Heineman designed The Bard’s Tale III: Thief of Fate and Dragon Wars while also programming ports of classics like Wolfenstein 3D and Battle Chess.

Tech giants pour billions into Anthropic as circular AI investments roll on

ChatGPT competitor secures billions from Microsoft and Nvidia in deal to use cloud services and chips.

On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft’s cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic’s next funding round.

The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker’s main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI “remains a critical partner,” while adding that the companies will increasingly be customers of each other.

“We will use Anthropic models, they will use our infrastructure, and we’ll go to market together,” Nadella said.

Google CEO: If an AI bubble pops, no one is getting out clean

Sundar Pichai says no company is immune if AI bubble bursts, echoing dotcom fears.

On Tuesday, Alphabet CEO Sundar Pichai warned of “irrationality” in the AI market, telling the BBC in an interview, “I think no company is going to be immune, including us.” His comments arrive as scrutiny over the state of the AI market has reached new heights, with Alphabet shares doubling in value over seven months to reach a $3.5 trillion market capitalization.

Speaking exclusively to the BBC at Google’s California headquarters, Pichai acknowledged that while AI investment growth is at an “extraordinary moment,” the industry can “overshoot” in investment cycles, as we’re seeing now. He drew comparisons to the late 1990s Internet boom, which saw early Internet company valuations surge before collapsing in 2000, leading to bankruptcies and job losses.

“We can look back at the Internet right now. There was clearly a lot of excess investment, but none of us would question whether the Internet was profound,” Pichai said. “I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules

Ongoing struggles with AI model instruction-following show that true human-level AI is still a ways off.

Em dashes have become what many believe to be a telltale sign of AI-generated text over the past few years. The punctuation mark appears frequently in outputs from ChatGPT and other AI chatbots, sometimes to the point where readers believe they can identify AI writing by its overuse alone—although people can overuse it, too.

On Thursday evening, OpenAI CEO Sam Altman posted on X that ChatGPT has started following custom instructions to avoid using em dashes. “Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it’s supposed to do!” he wrote.

The post, which came two days after the release of OpenAI’s new GPT-5.1 AI model, received mixed reactions from users who have struggled for years with getting the chatbot to follow specific formatting preferences. And this “small win” raises a very big question: If the world’s most valuable AI company has struggled with controlling something as simple as punctuation use after years of trying, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.

OpenAI walks a tricky tightrope with GPT-5.1’s eight new personalities

New controls attempt to please critics on both sides with a balance between bland and habit-forming.

On Wednesday, OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking, two updated versions of its flagship AI models now available in ChatGPT. The company is wrapping the models in the language of anthropomorphism, claiming that they’re warmer, more conversational, and better at following instructions.

The release follows complaints earlier this year that OpenAI’s previous models were excessively cheerful and sycophantic, along with an opposing controversy among users over how the company modified GPT-5’s default output style in the wake of several lawsuits over user suicides.

The company now faces intense scrutiny from lawyers and regulators that could threaten its future operations. In that kind of environment, it’s difficult to just release a new AI model, throw out a few stats, and move on like the company could even a year ago. But here are the basics: The new GPT-5.1 Instant model will serve as ChatGPT’s faster default option for most tasks, while GPT-5.1 Thinking is a simulated reasoning model that attempts to handle more complex problem-solving tasks.

Meta’s star AI scientist Yann LeCun plans to leave for own startup

AI pioneer reportedly frustrated with Meta’s shift from research to rapid product releases.

Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported. The French-US scientist has reportedly told associates he will depart in the coming months and is already in early talks to raise funds for the new venture. The planned departure comes after CEO Mark Zuckerberg radically overhauled Meta’s AI operations, having decided the company had fallen behind rivals such as OpenAI and Google.

World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone. Unlike current large language models (such as the kind that power ChatGPT) that predict the next segment of data in a sequence, world models would ideally simulate cause-and-effect scenarios, understand physics, and enable machines to reason and plan more like animals do. LeCun has said this architecture could take a decade to fully develop.
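
As a rough illustration of the difference (a toy sketch, not any lab’s actual architecture): a language model maps a sequence to its next element, while a world model maps a current state to a predicted next state, which is what lets a planner roll out cause-and-effect scenarios before acting.

```python
# Toy world-model interface: predict the next physical state rather than
# the next token. The hand-coded gravity here stands in for dynamics a
# real world model would have to learn from video and spatial data.
from dataclasses import dataclass

@dataclass
class State:
    height: float    # a ball's height in meters
    velocity: float  # vertical velocity in m/s

def step(s: State, dt: float = 0.1) -> State:
    """Advance the toy dynamics one time step (Euler integration)."""
    g = -9.8  # gravitational acceleration, m/s^2
    return State(height=s.height + s.velocity * dt,
                 velocity=s.velocity + g * dt)

# Roll the model forward to answer a "what happens if" question
# (where is the ball after one second?) without acting in the world.
s = State(height=10.0, velocity=0.0)
for _ in range(10):  # ten 0.1 s steps = one second
    s = step(s)
print(s)  # predicted state: height ~= 5.59 m, velocity -9.8 m/s
```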

While some AI experts believe that Transformer-based AI models—such as large language models, video synthesis models, and interactive world synthesis models—have emergently modeled physics or absorbed the structural rules of the physical world from training data examples, the evidence so far generally points to sophisticated pattern-matching rather than a genuine understanding of how the physical world works.
