Science-centric streaming service Curiosity Stream is an AI-licensing firm now

Curiosity Stream’s owner has more content for AI companies than it does for subscribers.

We all know streaming services’ usual tricks for making more money: get more subscribers, charge those subscribers more money, and sell ads. But science streaming service Curiosity Stream is taking a new route that could reshape how streaming companies, especially niche options, try to survive.

Discovery Channel founder John Hendricks launched Curiosity Stream in 2015. The streaming service costs $40 per year, and it doesn’t have commercials.

The streaming business has grown to also include the Curiosity Channel TV channel. CuriosityStream Inc. also makes money through original programming and its Curiosity University educational programming. The firm turned its first positive net income in its fiscal Q1 2025, after about a decade of business.

Read full article

Comments

Google tells employees it must double capacity every 6 months to meet AI demand

Google’s AI infrastructure chief tells staff it needs thousandfold capacity increase in 5 years.

While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”

Read full article

Comments

AI trained on bacterial genomes produces never-before-seen proteins

Genes with related functions cluster together, and the AI uses that.

AI systems have recently had a lot of success in one key aspect of biology: the relationship between a protein’s structure and its function. These efforts have included the ability to predict the structure of most proteins and to design proteins structured so that they perform useful functions. But all of these efforts are focused on the proteins and amino acids that build them.

But biology doesn’t generate new proteins at that level. Instead, changes have to take place at the nucleic acid level before eventually making their presence felt at the protein level. And the DNA level is fairly removed from proteins, with lots of critical non-coding sequences, redundancy, and a fair degree of flexibility. It’s not necessarily obvious that learning the organization of a genome would help an AI system figure out how to make functional proteins.

But it now seems like using bacterial genomes for the training can help develop a system that can predict proteins, some of which don’t look like anything we’ve ever seen before.

Read full article

Comments

Trump revives unpopular Ted Cruz plan to punish states that impose AI laws

Cruz plan to block broadband funding lost 99-1, but now it’s back—in Trump form.

President Trump is considering an executive order that would require the federal government to file lawsuits against states with AI laws, and prevent states with AI laws from obtaining broadband funding.

The draft order, “Eliminating State Law Obstruction of National AI Policy,” would order the attorney general to “establish an AI Litigation Task Force whose sole responsibility shall be to challenge State AI laws, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.”

The draft order says the Trump administration “will act to ensure that there is a minimally burdensome national standard—not 50 discordant State ones.” It specifically names laws enacted by California and Colorado and directs the Secretary of Commerce to evaluate whether other laws should be challenged.

Read full article

Comments

Google’s new Nano Banana Pro uses Gemini 3 power to generate more realistic AI images

Google’s new image-generator model is available to try globally today.

Google’s meme-friendly Nano Banana image-generation model is getting an upgrade. The new Nano Banana Pro is rolling out with improved reasoning and instruction following, giving users the ability to create more accurate images with legible text and make precise edits to existing images. It’s available to everyone in the Gemini app, but free users will find themselves up against the usage limits pretty quickly.

Nano Banana Pro is part of the newly launched Gemini 3 Pro—it’s actually called Gemini 3 Pro Image in the same way the original is Gemini 2.5 Flash Image, but Google is sticking with the meme-y name. You can access it by selecting Gemini 3 Pro and then turning on the “Create images” option.

Nano Banana Pro: Your new creative partner.

Google says the new model can follow complex prompts to create more accurate images. The model is apparently so capable it can generate an entire usable infographic in a single shot with no weird AI squiggles in place of words. Nano Banana Pro is also better at maintaining consistency in images. You can blend up to 14 images with this tool, and it can maintain the appearance of up to five people in outputs.

Read full article

Comments

“We’re in an LLM bubble,” Hugging Face CEO says—but not an AI one

The risks of AI investment in manufacturing and other areas are less clear.

There’s been a lot of talk of an AI bubble lately, especially with regards to circular funding involving companies like OpenAI and Anthropic—but Clem Delangue, CEO of machine learning resources hub Hugging Face, has made the case that the bubble is specific to large language models, which is just one application of AI.

“I think we’re in an LLM bubble, and I think the LLM bubble might be bursting next year,” he said at an Axios event this week, as quoted in a TechCrunch article. “But ‘LLM’ is just a subset of AI when it comes to applying AI to biology, chemistry, image, audio, [and] video. I think we’re at the beginning of it, and we’ll see much more in the next few years.”

At Ars, we’ve written at length in recent days about the fears around AI investment. But to Delangue’s point, almost all of those discussions are about companies whose chief product is large language models, or the data centers meant to drive those—specifically, those focused on general-purpose chatbots that are meant to be everything for everybody.

Read full article

Comments

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

Integration of Copilot Actions into Windows is off by default, but for how long?

Microsoft’s warning on Tuesday that an experimental AI Agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

Read full article

Comments

DeepMind’s latest: An AI for handling mathematical proofs

AlphaProof can handle math challenges but needs a bit of help right now.

Computers are extremely good with numbers, but they haven’t gotten many human mathematicians fired. Until recently, they could barely hold their own in high school-level math competitions.

But now Google’s DeepMind team has built AlphaProof, an AI system that matched silver medalists’ performance at the 2024 International Mathematical Olympiad, scoring just one point short of gold at the most prestigious undergrad math competition in the world. And that’s kind of a big deal.

True understanding

The reason computers fared poorly in math competitions is that, while they far surpass humanity’s ability to perform calculations, they are not really that good at the logic and reasoning that is needed for advanced math. Put differently, they are good at performing calculations really quickly, but they usually suck at understanding why they’re doing them. While something like addition seems simple, humans can do semi-formal proofs based on definitions of addition or go for fully formal Peano arithmetic that defines the properties of natural numbers and operations like addition through axioms.

Read full article

Comments

How Louvre thieves exploited human psychology to avoid suspicion—and what it reveals about AI

For humans and AI, when something fits the category of “ordinary,” it slips from notice.

On a sunny morning on October 19 2025, four men allegedly walked into the world’s most-visited museum and left, minutes later, with crown jewels worth 88 million euros ($101 million). The theft from Paris’ Louvre Museum—one of the world’s most surveilled cultural institutions—took just under eight minutes.

Visitors kept browsing. Security didn’t react (until alarms were triggered). The men disappeared into the city’s traffic before anyone realized what had happened.

Investigators later revealed that the thieves wore hi-vis vests, disguising themselves as construction workers. They arrived with a furniture lift, a common sight in Paris’s narrow streets, and used it to reach a balcony overlooking the Seine. Dressed as workers, they looked as if they belonged.

This strategy worked because we don’t see the world objectively. We see it through categories—through what we expect to see. The thieves understood the social categories that we perceive as “normal” and exploited them to avoid suspicion. Many artificial intelligence (AI) systems work in the same way and are vulnerable to the same kinds of mistakes as a result.

The sociologist Erving Goffman would describe what happened at the Louvre using his concept of the presentation of self: people “perform” social roles by adopting the cues others expect. Here, the performance of normality became the perfect camouflage.

The sociology of sight

Humans carry out mental categorization all the time to make sense of people and places. When something fits the category of “ordinary,” it slips from notice.

AI systems used for tasks such as facial recognition and detecting suspicious activity in a public area operate in a similar way. For humans, categorization is cultural. For AI, it is mathematical.

But both systems rely on learned patterns rather than objective reality. Because AI learns from data about who looks “normal” and who looks “suspicious,” it absorbs the categories embedded in its training data. And this makes it susceptible to bias.

The Louvre robbers weren’t seen as dangerous because they fit a trusted category. In AI, the same process can have the opposite effect: people who don’t fit the statistical norm become more visible and over-scrutinized.

It can mean a facial recognition system disproportionately flags certain racial or gendered groups as potential threats while letting others pass unnoticed.

A sociological lens helps us see that these aren’t separate issues. AI doesn’t invent its categories; it learns ours. When a computer vision system is trained on security footage where “normal” is defined by particular bodies, clothing, or behavior, it reproduces those assumptions.

Just as the museum’s guards looked past the thieves because they appeared to belong, AI can look past certain patterns while overreacting to others.

Categorization, whether human or algorithmic, is a double-edged sword. It helps us process information quickly, but it also encodes our cultural assumptions. Both people and machines rely on pattern recognition, which is an efficient but imperfect strategy.

A sociological view of AI treats algorithms as mirrors: They reflect back our social categories and hierarchies. In the Louvre case, the mirror is turned toward us. The robbers succeeded not because they were invisible, but because they were seen through the lens of normality. In AI terms, they passed the classification test.

From museum halls to machine learning

This link between perception and categorization reveals something important about our increasingly algorithmic world. Whether it’s a guard deciding who looks suspicious or an AI deciding who looks like a “shoplifter,” the underlying process is the same: assigning people to categories based on cues that feel objective but are culturally learned.

When an AI system is described as “biased,” this often means that it reflects those social categories too faithfully. The Louvre heist reminds us that these categories don’t just shape our attitudes, they shape what gets noticed at all.

After the theft, France’s culture minister promised new cameras and tighter security. But no matter how advanced those systems become, they will still rely on categorization. Someone, or something, must decide what counts as “suspicious behavior.” If that decision rests on assumptions, the same blind spots will persist.

The Louvre robbery will be remembered as one of Europe’s most spectacular museum thefts. The thieves succeeded because they mastered the sociology of appearance: They understood the categories of normality and used them as tools.

And in doing so, they showed how both people and machines can mistake conformity for safety. Their success in broad daylight wasn’t only a triumph of planning. It was a triumph of categorical thinking, the same logic that underlies both human perception and artificial intelligence.

The lesson is clear: Before we teach machines to see better, we must first learn to question how we see.

Vincent Charles, Reader in AI for Business and Management Science, Queen’s University Belfast and Tatiana Gherman, Associate Professor of AI for Business and Strategy, University of Northampton.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read full article

Comments

Tech giants pour billions into Anthropic as circular AI investments roll on

ChatGPT competitor secures billions from Microsoft and Nvidia in deal to use cloud services and chips.

On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft’s cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic’s next funding round.

The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker’s main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI “remains a critical partner,” while adding that the companies will increasingly be customers of each other.

“We will use Anthropic models, they will use our infrastructure, and we’ll go to market together,” Nadella said.

Read full article

Comments