OpenAI slams court order that lets NYT read 20 million complete user chats

OpenAI: NYT wants evidence of ChatGPT users trying to get around news paywall.

OpenAI wants a court to reverse a ruling forcing the ChatGPT maker to give 20 million user chats to The New York Times and other news plaintiffs that sued it over alleged copyright infringement. Although OpenAI previously offered 20 million user chats as a counter to the NYT’s demand for 120 million, the AI company says a court order requiring production of the chats is too broad.

“The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT,” OpenAI said today in a filing in US District Court for the Southern District of New York. “Disclosure of those logs is thus much more likely to expose private information [than individual prompt-output pairs], in the same way that eavesdropping on an entire conversation reveals more private information than a 5-second conversation fragment.”

OpenAI’s filing said that “more than 99.99%” of the chats “have nothing to do with this case.” It asked the district court to “vacate the order and order News Plaintiffs to respond to OpenAI’s proposal for identifying relevant logs.” OpenAI could also seek review in a federal court of appeals.

Meta’s star AI scientist Yann LeCun plans to leave for own startup

AI pioneer reportedly frustrated with Meta’s shift from research to rapid product releases.

Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported. The French-US scientist has reportedly told associates he will depart in the coming months and is already in early talks to raise funds for the new venture. The departure comes after CEO Mark Zuckerberg radically overhauled Meta’s AI operations upon deciding the company had fallen behind rivals such as OpenAI and Google.

World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone. Unlike current large language models (the kind that powers ChatGPT), which predict the next segment of data in a sequence, world models would ideally simulate cause-and-effect scenarios, understand physics, and enable machines to reason and plan more like animals do. LeCun has said this architecture could take a decade to fully develop.

While some AI experts believe that Transformer-based AI models, including large language models, video synthesis models, and interactive world synthesis models, have emergently modeled physics or absorbed the structural rules of the physical world from their training data, the evidence so far generally points to sophisticated pattern-matching rather than a genuine understanding of how the physical world actually works.
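The distinction is easier to see with a toy contrast. The sketch below is purely illustrative and assumes nothing about any real system: a trivial sequence predictor that continues a pattern from surface statistics stands in for the language-model side, and a tiny state-update function with an explicit gravity rule stands in for the world-model side.

```python
# Toy contrast between the two ideas described above. Neither function
# resembles a real system; this only makes the distinction concrete.

def next_token_predictor(tokens: list[str]) -> str:
    """Predict the next item purely from surface statistics of the sequence."""
    # Trivial stand-in for an LLM: return whichever token was most common so far.
    return max(set(tokens), key=tokens.count)


def world_model_step(state: dict, dt: float = 0.1) -> dict:
    """Advance a simulated state (a falling ball) using an explicit physics rule."""
    g = -9.8  # gravity in m/s^2
    velocity = state["velocity"] + g * dt
    height = max(0.0, state["height"] + velocity * dt)
    return {"height": height, "velocity": velocity}


print(next_token_predictor(["the", "ball", "the", "falls"]))  # prints "the"

state = {"height": 1.0, "velocity": 0.0}
for _ in range(5):
    state = world_model_step(state)
print(state)  # the ball has moved toward the ground under simulated gravity
```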

Lilbits: A new modular PC input device, Google brings even more AI to Pixel phones, Apple’s $230 knit iPhone holder

Over the past few years I’ve seen a couple of different widescreen touchscreen displays that are designed to be used as both a second screen and an input device for a computer. But the UltraBar X stands out for its unusual, modular design. At its most basic, this device is just another 7-inch ultrawide touchscreen […]

Google says new cloud-based “Private AI Compute” is just as secure as local processing

New system allows devices to connect directly to a secure space in Google’s AI servers.

Google’s current mission is to weave generative AI into as many products as it can, getting everyone accustomed to, and maybe even dependent on, working with confabulatory robots. That means it needs to feed the bots a lot of your data, and that’s getting easier with the company’s new Private AI Compute. Google claims its new secure cloud environment will power better AI experiences without sacrificing your privacy.

The pitch sounds a lot like Apple’s Private Cloud Compute. Google’s Private AI Compute runs on “one seamless Google stack” powered by the company’s custom Tensor Processing Units (TPUs). These chips have integrated secure elements, and the new system allows devices to connect directly to the protected space via an encrypted link.

Google’s TPUs rely on an AMD-based Trusted Execution Environment (TEE) that encrypts and isolates memory from the host. Theoretically, that means no one else—not even Google itself—can access your data. Google says independent analysis by NCC Group shows that Private AI Compute meets its strict privacy guidelines.
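As a rough illustration of the attestation idea behind such trusted execution environments (not Google’s or AMD’s actual protocol, which relies on hardware-rooted, vendor-signed attestation), the toy sketch below has a client refuse to send data unless the remote environment reports an expected code measurement in an authenticated report. The shared HMAC key is a placeholder assumption used only to keep the example self-contained.

```python
import hashlib
import hmac

# Toy sketch of the attestation idea behind confidential computing: the client
# only releases data after checking that the remote environment reports the
# expected code measurement and that the report is authenticated. Real TEE
# attestation uses a vendor-signed certificate chain rooted in hardware; the
# shared-key HMAC below is only a stand-in so the sketch stays self-contained.

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build-1.0").hexdigest()
SIGNING_KEY = b"demo-key"  # placeholder for a hardware-rooted signing key


def enclave_attestation_report(enclave_code: bytes) -> dict:
    """The enclave reports a hash (measurement) of the code it is running."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    tag = hmac.new(SIGNING_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "tag": tag}


def client_will_send_data(report: dict) -> bool:
    """Client side: verify authenticity and the expected measurement first."""
    expected_tag = hmac.new(
        SIGNING_KEY, report["measurement"].encode(), hashlib.sha256
    ).hexdigest()
    authentic = hmac.compare_digest(expected_tag, report["tag"])
    running_expected_code = report["measurement"] == EXPECTED_MEASUREMENT
    return authentic and running_expected_code


report = enclave_attestation_report(b"approved-enclave-build-1.0")
print(client_will_send_data(report))  # True -> safe to open an encrypted session
```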

Google announces even more AI in Photos app, powered by Nano Banana

Google’s Nano Banana is powering a raft of new features in the app.

We’re running out of ways to tell you that Google is releasing more generative AI features, but that’s what’s happening in Google Photos today. The Big G is finally making good on its promise to add its market-leading Nano Banana image-editing model to the app. The model powers a couple of features, and it’s not just for Google’s Android platform. Nano Banana edits are also coming to the iOS version of the app.

Nano Banana started making waves when it appeared earlier this year as an unbranded demo. You simply feed the model an image and tell it what edits you want to see. Google said Nano Banana was destined for the Photos app back in October, but it’s only now beginning the rollout. The Photos app already had conversational editing in the “Help Me Edit” feature, but it was running an older non-fruit model that produced inferior results. Nano Banana editing will produce AI slop, yes, but it’s better slop.

Nano Banana in Help Me Edit

Google says the updated Help Me Edit feature has access to your private face groups, so you can use names in your instructions. For example, you could type “Remove Riley’s sunglasses,” and Nano Banana will identify Riley in the photo (assuming you have a person of that name saved) and make the edit without further instructions. You can also ask for more fantastical edits in Help Me Edit, changing the style of the image from top to bottom.

You won’t believe the excuses lawyers have after getting busted for using AI

I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoiding or diminishing sanctions was to admit the AI use as soon as it was detected, act humbly, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lied about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.

Researchers isolate memorization from reasoning in AI neural networks

Basic arithmetic ability lives in the memorization pathways, not logic circuits.

When engineers build AI language models like GPT-5 from training data, at least two major processing features emerge: memorization (reciting exact text they’ve seen before, like famous quotes or passages from books) and reasoning (solving new problems using general principles). New research from AI startup Goodfire.ai provides the first potentially clear evidence that these different functions actually work through completely separate neural pathways in the model’s architecture.

The researchers discovered that this separation proves remarkably clean. In a preprint paper released in late October, they reported that when they removed the memorization pathways, models lost 97 percent of their ability to recite training data verbatim but kept nearly all their “logical reasoning” ability intact.

For example, at layer 22 of the Allen Institute for AI’s OLMo-7B language model, the bottom 50 percent of weight components showed 23 percent higher activation on memorized data, while the top 10 percent showed 26 percent higher activation on general, non-memorized text. This mechanistic split enabled the researchers to surgically remove memorization while preserving other capabilities.
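The paper’s exact procedure isn’t reproduced here, but the mechanical idea of ranking a layer’s weight components and removing the lower-ranked ones can be sketched with a plain singular value decomposition. Everything in the snippet (the random toy matrix, the 50 percent cutoff, SVD as the ranking) is an assumption for illustration; the researchers’ actual ranking criterion differs and is applied to a real model’s layers.

```python
import numpy as np

# Illustrative sketch only: split a weight matrix into ranked components via
# SVD, then rebuild it with the lower-ranked half zeroed out. This just shows
# what "removing a subset of weight components" means mechanically; it is not
# the Goodfire.ai method.

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))          # stand-in for one layer's weight matrix

U, S, Vt = np.linalg.svd(W, full_matrices=False)

keep = len(S) // 2                      # keep the top 50% of components
S_pruned = S.copy()
S_pruned[keep:] = 0.0                   # zero out the bottom half

W_pruned = (U * S_pruned) @ Vt          # reconstruct without those components

x = rng.normal(size=64)                 # a stand-in activation vector
print("original output norm:", np.linalg.norm(W @ x))
print("pruned   output norm:", np.linalg.norm(W_pruned @ x))
```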

Olares One is an AI-focused “personal cloud” mini PC with NVIDIA RTX 5090 and Intel Core Ultra 9 275HX (crowdfunding)

The Olares One is a small desktop computer with the guts of a high-end gaming laptop, including an Intel Core Ultra 9 275HX processor, NVIDIA GeForce RTX 5090 mobile graphics with 24GB of GDDR7 RAM, 96GB of DDR5 memory, and a 2TB PCIe 4.0 NVMe SSD. But it’s not really positioned as a gaming PC. Instead, […]

Researchers surprised that with AI, toxicity is harder to fake than intelligence

New “computational Turing test” reportedly catches AI pretending to be human with up to 80% accuracy.

The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd.

On Wednesday, researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University released a study revealing that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy.

The study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
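As a hedged sketch of what such an automated classifier might look like in its simplest form (not the study’s actual pipeline, features, or data, all of which are assumptions here), the snippet below fits a bag-of-words model to a handful of invented human and AI replies and then scores a new reply.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy version of an automated "is this reply machine-generated?" classifier.
# Real studies use far larger corpora and richer linguistic features; the
# handful of made-up replies below only demonstrates the mechanics.

human_replies = [
    "lol no way, that ref was blind",
    "ugh my train is late again, classic",
    "idk man, feels like a stretch to me",
]
ai_replies = [
    "That sounds like a wonderful experience! I'm glad it went well for you.",
    "Thank you for sharing this. It's a thoughtful perspective worth considering.",
    "I appreciate your point of view, and I hope your day keeps getting better!",
]

texts = human_replies + ai_replies
labels = [0] * len(human_replies) + [1] * len(ai_replies)  # 1 = AI-generated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# The classifier's guess; with such tiny toy data, treat the output as
# illustrative only.
print(clf.predict(["Wishing you a truly lovely afternoon, friend!"]))
```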

Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool

ChatGPT leaks seem to confirm OpenAI scrapes Google, expert says.

For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not to snoop on private chats.

Normally, when site managers access GSC performance reports, they see queries based on keywords or short phrases that Internet users type into Google to find relevant content. But starting this September, odd queries, sometimes more than 300 characters long, could also be found in GSC. The entries showed only user inputs, and the chats appeared to come from unwitting people asking a chatbot to help solve relationship or business problems, people who likely expected those conversations to remain private.

Jason Packer, owner of an analytics consulting firm called Quantable, was among the first to flag the issue in a detailed blog last month.
