Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns

“I don’t believe we’re in an AI bubble,” says Huang after announcing $500B in orders.

On Wednesday, Nvidia became the first company in history to reach a $5 trillion market capitalization, on the heels of a GTC conference keynote in Washington, DC, where CEO Jensen Huang announced $500 billion in AI chip orders and plans to build seven supercomputers for the US government. The milestone comes a mere three months after Nvidia crossed the $4 trillion mark in July, vaulting the company past tech giants like Apple and Microsoft in market valuation and driving continued fears of an AI investment bubble.

Nvidia’s shares have climbed nearly 12-fold since the launch of ChatGPT in late 2022, as the AI boom propelled the S&P 500 to record highs. The stock rose another 4.6 percent on Wednesday following Tuesday’s announcements at the company’s GTC conference. During a Bloomberg Television interview at the event, Huang dismissed concerns about overheated valuations, saying, “I don’t believe we’re in an AI bubble. All of these different AI models we’re using—we’re using plenty of services and paying happily to do it.”

Nvidia expects to ship 20 million units of its latest chips, compared to just 4 million units of the previous Hopper generation over its entire lifetime, Huang said at the conference. The $500 billion figure represents cumulative orders for the company’s Blackwell and Rubin processors through the end of 2026, though Huang noted that his projections did not include potential sales to China.

Meta: Pirated Adult Film Downloads Were For “Personal Use,” Not AI Training

Meta is using a classic BitTorrent defense in its legal battle with adult film producer Strike 3 Holdings. In its motion to dismiss, the tech company argues that IP-address evidence is insufficient to prove who the infringer is. Meta further counters that the “sporadic” downloads on its corporate network began years before its relevant AI research started. Instead of AI training, Meta argues, the activity was likely just for “private personal use.”

Over the past two years, rightsholders of all kinds have filed lawsuits against companies that develop AI models.

With billions in potential damages at stake, these cases have also drawn the interest of Strike 3 Holdings.

As the most prolific copyright litigant in the United States, the adult film producer has filed tens of thousands of lawsuits against alleged BitTorrent pirates. This summer it expanded its scope by taking aim at Meta.

2,396 Movies, $359 Million in Damages

Strike 3 Holdings and Counterlife Media, which are known for popular adult brands including Vixen, Tushy, Blacked, and Deeper, filed a copyright infringement complaint in a California federal court. The companies allege that Meta has downloaded at least 2,396 of their films since 2018, purportedly to aid its AI video training.

The adult producers discovered the alleged infringements after Meta’s BitTorrent activity was revealed in a lawsuit filed by several book authors. In that case, Meta admitted that it obtained content from pirate sources.

This prompted Strike 3 and Counterlife Media to search for Meta-linked IP addresses in their archive of collected BitTorrent data. The scan revealed 47 IP addresses, identified as owned by Facebook, that were allegedly used to infringe their copyrighted works.

If Meta is indeed found liable for these alleged infringements, the adult content producers could seek as much as $359 million in damages. The company, however, has returned fire, asking the court to dismiss what it describes as a ‘nonsensical’ complaint for various reasons.

Meta Hits Back at “Copyright Troll”

This week, Meta responded to the complaint by filing a motion to dismiss. The tech giant describes Strike 3 as a prolific copyright litigator that some have labeled a “copyright troll.” Strike 3’s own lawsuits against alleged BitTorrent pirates also served as inspiration for one of Meta’s defenses.

Taking a page from the BitTorrent piracy defense playbook, Meta counters that the IP address evidence presented by the plaintiffs is meaningless without context. The Court of Appeals for the Ninth Circuit previously ruled that an IP address alone is not sufficient to identify the ‘direct’ infringer; rightsholders need “something more.”

According to Meta, there is no evidence that the alleged infringing activity on its corporate network was centrally orchestrated. That theory would be “nonsensical,” it counters, noting that Strike 3 logged infringing activity as early as 2018, years before Meta started training its video models.

“Plaintiffs do not explain how sporadic torrenting activity that purportedly commenced in 2018—years before Meta allegedly ‘began researching Multimodal Models and Generative Video’ in 2022—could have been intended for ‘purposes of acquiring content to train’ such models,” Meta notes.

“Plaintiffs’ supposition that Meta must have instigated these downloads for AI training is implausible on its face. All Plaintiffs have are IP addresses, which is insufficient to state a claim.”

Likely for “Private Personal Use”

Meta flatly denies that the adult video downloads were used for AI purposes. Since there is no evidence that Meta directed this activity, the company argues, it cannot be held liable for direct copyright infringement.

The tech company doesn’t just deny the allegations; it also offers an alternative explanation. Meta suggests that employees or visitors may have downloaded the pirated videos for personal use.

The personal use theory is also consistent with the relatively small download volume, which Meta argues is far too low for AI training purposes.

“[T]he small number of downloads—roughly 22 per year on average across dozens of Meta IP addresses—is plainly indicative of private personal use, not a concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta writes.
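
As a rough sanity check on that figure, here is one possible reading of the math. This derivation is our assumption, not arithmetic stated in the brief: it takes the 157 of plaintiffs’ works tied to Meta corporate IPs (a figure cited later in the filing) spread over the roughly seven years since 2018.

```python
# Assumed reading of Meta's "22 per year" figure; the brief does not
# show its math, so this is only a plausible reconstruction.
works_via_meta_ips = 157   # plaintiffs' works tied to Meta corporate IPs, per the filing
years_since_2018 = 7       # 2018 through 2025
print(works_via_meta_ips / years_since_2018)  # ~22.4 downloads per year
```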

The complaint also referenced thousands of IP addresses outside Meta’s network that were allegedly used to conceal its BitTorrent activities. These addresses showed correlated activity, which the plaintiffs presented as further evidence of wrongdoing.

Meta, however, disputes this allegation, noting that the timing of the activity also points to personal use rather than an orchestrated scheme.

“And there is yet another conundrum Plaintiffs fail to address: why would Meta seek to ‘conceal[]’ certain alleged downloads of Plaintiffs’ and third-party content, but use easily traceable Meta corporate IP addresses for many hundreds of others, including 157 of Plaintiffs’ works?”

“The obvious answer is that it would not do so; Plaintiffs’ entire AI training theory is nonsensical and unsupported,” Meta concludes.

Contributory or Vicarious Infringement?

Meta does not rule out that its network was used to download the pirated adult videos. However, the company again cites jurisprudence from other BitTorrent piracy lawsuits to argue that it is not secondarily liable for this activity.

The rightsholders’ vicarious copyright infringement claim fails, Meta argues, because the company has no financial interest in these ‘personal use’ downloads. Nor was it required to supervise or intervene, as the Ninth Circuit ‘Cobbler’ case made clear.

Meta uses the same Cobbler precedent to counter the contributory infringement claim. That claim also fails, Meta says, because it had no “knowledge” of the pirating activity, nor did it materially contribute to it.

All in all, Meta sees no reason why this case should go any further and asks the court to dismiss the complaint in full.

“[T]hese claims fail not only for lack of supporting facts, but also because Plaintiffs’ theory of liability makes no sense and cannot be reconciled with the facts they do plead. The entire complaint against Meta should be dismissed with prejudice,” Meta concludes.

Strike 3 Holdings and Counterlife Media have two weeks to oppose the motion to dismiss, after which Meta will be allowed to file a follow-up response. The California federal court will then decide whether the case moves forward or ends here.

A copy of Meta’s motion to dismiss, submitted in the U.S. District Court for the Northern District of California, is available here (pdf).

Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech immediately opposed the proposed law as “heavy-handed.”

The US will weigh a ban on children’s access to companion bots, as two senators announced bipartisan legislation Tuesday that would criminalize making chatbots that encourage harms like suicidal ideation or engage kids in sexually explicit chats.

At a press conference, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the GUARD Act, joined by grieving parents holding up photos of the children they lost after interactions with chatbots.

If passed, the law would require chatbot makers to check IDs or use “any other commercially reasonable method” to accurately assess whether a user is a minor, who must then be blocked. Companion bots would also have to repeatedly remind users of all ages that they aren’t real humans or trusted professionals.

OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly

Sensitive chats are rare but significant given the large user base.

An AI language model like the kind that powers ChatGPT is a gigantic statistical web of data relationships. You give it a prompt (such as a question), and it provides a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people are relying on this statistical process to guide them through life’s challenges. It’s the first time in history that large numbers of people have begun to confide their feelings to a talking machine, and mitigating the potential harm the systems can cause has been an ongoing challenge.
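
To make the “statistical web” framing concrete, here is a deliberately tiny sketch of the core operation, written by us for illustration rather than drawn from any real model: given a context, pick the next word in proportion to learned probabilities. A production LLM does the same thing with billions of learned parameters instead of a hand-written table.

```python
import random

# Toy stand-in for a language model: a hand-written table mapping a
# context to a probability distribution over candidate next words.
# Real models learn these relationships from training data.
NEXT_WORD_PROBS = {
    "i feel": {"fine": 0.4, "anxious": 0.35, "hopeful": 0.25},
}

def sample_next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    dist = NEXT_WORD_PROBS[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word("i feel"))  # e.g. "anxious"
```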

On Monday, OpenAI released data estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. It’s a tiny fraction of the overall user base, but with more than 800 million weekly active users, that translates to over a million people each week, reports TechCrunch.
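
The arithmetic behind that headline number is straightforward:

```python
weekly_active_users = 800_000_000   # "more than 800 million", per OpenAI
flagged_share = 0.0015              # 0.15 percent of weekly active users
print(int(weekly_active_users * flagged_share))  # 1,200,000 people per week
```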

OpenAI also estimates that a similar percentage of users show heightened levels of emotional attachment to ChatGPT, and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the chatbot.

Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement

New deal extends Microsoft IP rights until 2032 or until AGI arrives.

On Monday, Microsoft and OpenAI announced a revised partnership agreement that introduces an independent expert panel to verify when OpenAI achieves so-called artificial general intelligence (AGI), a determination that will trigger major shifts in how the companies share technology and revenue. The deal values Microsoft’s stake in OpenAI at approximately $135 billion and extends the exclusive partnership through 2032 while giving both companies more freedom to pursue AGI independently.

The partnership began in 2019 when Microsoft invested $1 billion in OpenAI. Since then, Microsoft has provided billions of dollars in cloud computing resources through Azure and used OpenAI’s models as the basis of products like Copilot. The new agreement maintains Microsoft as OpenAI’s frontier model partner and preserves the company’s exclusive rights to OpenAI’s IP, along with Azure API exclusivity, until the threshold of AGI is reached.

Under the previous arrangement, OpenAI alone would determine when it achieved AGI, which is a nebulous concept that is difficult to define. The revised deal requires an independent expert panel to verify that claim, a change that adds oversight to a determination with billions of dollars at stake. When the panel confirms that AGI has been reached, Microsoft’s intellectual property rights to OpenAI’s research methods will expire, and the revenue-sharing arrangement between the companies will end, though payments will continue over a longer period.

AI-powered search engines rely on “less popular” sources, researchers find

Generative search engines often cite sites that wouldn’t appear in Google’s Top 100 links.

Since last year’s disastrous rollout of Google’s AI Overviews, the world at large has been aware of how AI-powered search results can differ wildly from the traditional list of links search engines have generated for decades. Now, new research helps quantify that difference, showing that AI search engines tend to cite less popular websites and ones that wouldn’t even appear in the Top 100 links listed in an “organic” Google search.

In the pre-print paper “Characterizing Web Search in The Age of Generative AI,” researchers from Ruhr University Bochum and the Max Planck Institute for Software Systems compared traditional link results from Google’s search engine to its AI Overviews and Gemini-2.5-Flash. The researchers also looked at GPT-4o’s web search mode and the separate “GPT-4o with Search Tool,” which resorts to searching the web only when the LLM decides it needs information found outside its own pre-trained data.

The researchers drew test queries from a number of sources, including specific questions submitted to ChatGPT in the WildChat dataset, general political topics listed on AllSides, and products included in the 100 most-searched Amazon products list.
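
The paper’s headline comparison, how often AI-cited sources fall outside Google’s organic top 100, boils down to a set-overlap measurement. The sketch below is our own approximation of that idea, not the researchers’ code, and the two input lists are assumed to come from whatever query results you have collected:

```python
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Reduce a URL to its host for a coarse, domain-level comparison."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def share_outside_top100(ai_citations: list[str], organic_top100: list[str]) -> float:
    """Fraction of AI-cited domains that never appear in the organic top 100."""
    organic_domains = {domain(u) for u in organic_top100}
    cited = [domain(u) for u in ai_citations]
    return sum(d not in organic_domains for d in cited) / len(cited)

# Hypothetical inputs: citations scraped from an AI answer and the
# organic Google results collected for the same query.
print(share_outside_top100(
    ["https://obscure-blog.example/review", "https://en.wikipedia.org/wiki/Topic"],
    ["https://en.wikipedia.org/wiki/Topic"],
))  # -> 0.5
```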

AI-generated receipts make submitting fake expenses easier

Software provider AppZen said fake AI receipts accounted for about 14 percent of fraud attempts.

Businesses are increasingly being deceived by employees using artificial intelligence for an age-old scam: faking expense receipts.

The launch of new image-generation models by top AI groups such as OpenAI and Google in recent months has sparked an influx of AI-generated receipts submitted through company expense systems, according to leading expense software platforms.

Software provider AppZen said fake AI receipts accounted for about 14 percent of fraudulent documents submitted in September, compared with none last year. Fintech group Ramp said its new software flagged more than $1 million in fraudulent invoices within 90 days.

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

In new research, AI models show a troubling tendency to agree with whatever the user says.

Researchers and users of LLMs have long been aware that AI models have a troubling tendency to tell people what they want to hear, even if that means being less accurate. But many reports of this phenomenon amount to mere anecdotes that don’t provide much visibility into how common this sycophantic behavior is across frontier LLMs.

Two recent research papers have approached this problem more rigorously, taking different tacks in attempting to quantify exactly how likely an LLM is to defer to a user who provides factually incorrect or socially inappropriate information in a prompt.

Solve this flawed theorem for me

In one pre-print study published this month, researchers from Sofia University and ETH Zurich looked at how LLMs respond when false statements are presented as the basis for difficult mathematical proofs and problems. The BrokenMath benchmark that the researchers constructed starts with “a diverse set of challenging theorems from advanced mathematics competitions held in 2025.” Those problems are then “perturbed” into versions that are “demonstrably false but plausible” by an LLM, with the results checked by expert review.
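
In spirit, a benchmark like BrokenMath reduces to a simple harness: hand the model a demonstrably false statement, then score whether it pushes back or dutifully “proves” it. The sketch below is our own simplification under stated assumptions: `query_model` is a hypothetical stand-in for an LLM API call, and the keyword scoring is far cruder than the paper’s expert-reviewed judging.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM is under test."""
    raise NotImplementedError("wire up a real model API here")

# Crude proxy: responses that flag the flaw tend to use words like these.
PUSHBACK_MARKERS = ("false", "incorrect", "does not hold", "counterexample")

def is_sycophantic(false_theorem: str) -> bool:
    """True if the model attempts a 'proof' instead of flagging the flaw."""
    answer = query_model(f"Prove the following theorem: {false_theorem}")
    return not any(marker in answer.lower() for marker in PUSHBACK_MARKERS)

def sycophancy_rate(perturbed_problems: list[str]) -> float:
    """Share of false-but-plausible problems the model plays along with."""
    return sum(map(is_sycophantic, perturbed_problems)) / len(perturbed_problems)
```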

Microsoft’s Mico heightens the risks of parasocial LLM relationships

“It looks like you’re trying to find a friend. Would you like help?”

Microsoft is rolling out a new face for its AI, and its name is Mico. The company announced the new animated, blob-like avatar for Copilot’s voice mode yesterday as part of a “human-centered” rebranding of its Copilot AI efforts.

Mico is part of a Microsoft program dedicated to the idea that “technology should work in service of people,” Microsoft wrote. The company insists this effort is “not [about] chasing engagement or optimizing for screen time. We’re building AI that gets you back to your life. That deepens human connection.”

Mico has drawn instant and obvious comparisons to Clippy, the animated paperclip that popped up to offer help with Microsoft Office starting in the ’90s. Microsoft has leaned into this comparison with an Easter egg that can transform Mico into an animated Clippy.
