The AI slop drops right from the top, as Trump posts vulgar deepfake of opponents

A sombrero and a fake mustache were also involved.

AI poses an obvious danger to the millennia-long human fight to find the truth. Large language model "hallucinations," vocal deepfakes, and now the increased use of video deepfakes have all had a blurring effect on facts, letting bad actors around the globe brush off even recorded events as mere "fake news."

The danger is perhaps most acute in the political realm, where deepfake audio and video can make any politician say or appear to do anything. In such a climate, our most senior elected officials have a special duty to model truth-seeking behavior and responsible AI use.

But what's the fun in that, when you can just blow up negotiations over a budget impasse by posting a deepfake video of your political opponents calling themselves "a bunch of woke pieces of shit" while mariachi music plays in the background? Oh—and did I mention the fake mustache? Or the CGI sombrero?

In 2022, the world axed a disease name seen as racist. US just switched back.

The name was not only offensive, it was also inaccurate.

Without explanation, the US Centers for Disease Control and Prevention under the Trump administration has reverted from using the disease name "mpox" to the obsolete "monkeypox," which the world abandoned in 2022 because it was seen as racist and stigmatizing.

Mpox is the name of the disease caused by Orthopoxvirus monkeypox, a relative of smallpox and cowpox that has exploded to global prominence in recent years. In 2022 and 2024, the spread of mpox caused the World Health Organization to declare international public health emergencies.

Amid the attention, health officials became acutely aware of the problematic name.

Alexa’s survival hinges on you buying more expensive Amazon devices

Echo speakers and displays for Alexa+ require more expensive components.

Amazon’s voice assistant is hanging on by a thread. And that thread is generative AI—or, in Amazon’s case, Alexa+.

Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been losing money, and the clock is ticking for Alexa to prove its worth.

Alexa+, a subscription-based generative AI service ($20 per month or included with Prime, which starts at $15/month), is supposed to solve Amazon's woes with Alexa. More conversational and powerful than the original Alexa, Alexa+ is designed to play a more central role in user transactions, enabling, in theory, Amazon to finally make money from voice assistants after 11 years.

Critics slam OpenAI’s parental controls while users rage, “Treat us like adults”

OpenAI still isn’t doing enough to protect teens, suicide prevention experts say.

As OpenAI tells it, the company has been consistently rolling out safety updates ever since parents Matthew and Maria Raine sued OpenAI, alleging that "ChatGPT killed my son."

On August 26, the day the lawsuit was filed, OpenAI seemed to publicly respond to claims that ChatGPT acted as a "suicide coach" for 16-year-old Adam Raine by publishing a blog post promising to do better to help people "when they need it most."

By September 2, that meant routing all users' sensitive conversations to a reasoning model with stricter safeguards, sparking backlash from users who feel like ChatGPT is handling their prompts with kid gloves. Two weeks later, OpenAI announced it would start predicting users' ages to improve safety more broadly. Then, this week, OpenAI introduced parental controls for ChatGPT and its video generator Sora 2. Those controls allow parents to limit their teens' use and even get access to information about chat logs in "rare cases" where OpenAI's "system and trained reviewers detect possible signs of serious safety risk."

Ubuntu Touch mobile Linux distro is now based on Ubuntu 24.04 LTS

Ubuntu Touch is a mobile operating system designed to run on smartphones, tablets, and other touchscreen devices. It was originally developed by Canonical, the company behind Ubuntu for desktop and server PCs. But Canonical abandoned the software years ago, and since then it’s been under development by a small community of developers at UBports. All […]

The post Ubuntu Touch mobile Linux distro is now based on Ubuntu 24.04 LTS appeared first on Liliputing.

Researchers find a carbon-rich moon-forming disk around giant exoplanet

Lots of carbon molecules but little sign of water in a super-Jupiter’s disk.

Many of the most interesting bodies in our Solar System aren't planets, but the moons that orbit them. They have active volcanoes, hydrocarbon oceans, geysers, and moon-wide oceans buried under icy crusts. And, as far as we can tell, the physics of the processes that produce large planets should make moon formation inevitable. Given how common planets are, our galaxy should be teeming with moons.

Yet, despite some tantalizing hints, we've not found a clear indication of a moon orbiting an exoplanet. What we have found are a few very young exoplanets that appear to have moon-forming disks around them. Now, the James Webb Space Telescope has obtained a spectrum of the moon-forming disk around a giant super-Jupiter and found that it's rich in small carbon-based molecules. That's despite the fact that the star it's orbiting seems to have a planet-forming disk that's mostly water.

Finding disks

We search for exo-moons and moon-forming disks using completely different methods. To spot an actual moon, we rely on its gravitational influence. At some points in its orbit, it will be towing its planet forward to speed up its orbit; at others, it will be holding its planet back. This introduces subtle variations in the timing of when the planet arrives in front of the star from Earth's perspective.
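To get a feel for why these timing variations are so subtle, here is a back-of-the-envelope sketch. A moon pulls its planet around their common barycenter, and that small displacement translates into the transit arriving slightly early or late. All the numbers below are illustrative assumptions (a Jupiter-mass planet with a moon a couple of times Ganymede's mass), not measurements of any real system:

```python
import math

# Illustrative, assumed values -- not data from a detected exomoon.
M_planet = 1.9e27   # kg, a Jupiter-mass planet
M_moon   = 1.5e23   # kg, a moon roughly 2x Ganymede's mass
a_moon   = 1.1e9    # m, moon's orbital radius around the planet
a_planet = 7.8e11   # m, planet's orbital radius around the star (~5.2 AU)
M_star   = 2.0e30   # kg, a Sun-like star
G = 6.674e-11       # m^3 kg^-1 s^-2

# The planet circles the planet-moon barycenter at this offset:
offset = a_moon * M_moon / (M_planet + M_moon)

# The planet's orbital speed around the star:
v_orbit = math.sqrt(G * M_star / a_planet)

# The transit arrives early or late by roughly offset / v_orbit:
ttv_seconds = offset / v_orbit
print(f"barycentric offset: {offset / 1000:.0f} km, "
      f"timing variation: ~{ttv_seconds:.1f} s")
```

Even a hefty moon shifts a Jupiter-like planet by only tens of kilometers, so the transit timing changes by mere seconds, which is why clear exomoon detections remain elusive.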

How “prebunking” can restore public trust and other September highlights

The evolution of Taylor Swift’s dialect, a rare Einstein cross, neutrino laser beams, and more.

It's a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we've featured year-end roundups of cool science stories we (almost) missed. This year, we're experimenting with a monthly collection. September's list includes how prebunking can restore public trust in election results; why ghost sharks grow weird forehead teeth; and using neutrinos to make a frickin' laser beam, among other highlights.

Prebunking increases trust in elections

[Image: A Brazilian voting machine, with a man's hand pushing the submit button. Credit: Superior Electoral Court of Brazil / Public domain]

False claims of voter fraud abounded in the wake of the 2020 US general election, when Joe Biden defeated incumbent Donald Trump for the presidency. Trump himself amplified those false claims, culminating in the violent attack on the US Capitol building on January 6, 2021. Two years later, Brazil faced a similar scenario in the wake of its 2022 general election in which voters ousted incumbent President Jair Bolsonaro. Once again, claims of fraud ran rampant as Bolsonaro supporters stormed their country's capital.

Intel and AMD trusted enclaves, the backbone of network security, fall to physical attacks

The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.

In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (Trusted Execution Environments)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use them. Intel calls its protection SGX, and AMD has named it SEV-SNP.

Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.

Attacking deterministic encryption

Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.
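The weakness of deterministic encryption can be shown with a toy sketch. The scheme below is a deliberately simplified stand-in (a hash-derived keystream tied to the memory address), not Intel's or AMD's actual memory cipher, but it has the same property the attacks exploit: identical plaintext at the same address always produces identical ciphertext, so a passive observer on the memory bus can spot repeated values, and an active one can replay old ciphertexts:

```python
import hashlib

KEY = b"secret-memory-encryption-key"

def encrypt_block(plaintext: bytes, address: int) -> bytes:
    # Toy deterministic scheme (NOT the real SGX/SEV-SNP cipher): the
    # keystream depends only on the key and the physical address, so the
    # same plaintext at the same address always yields the same ciphertext.
    stream = hashlib.sha256(KEY + address.to_bytes(8, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

addr = 0x1000
c1 = encrypt_block(b"LOGIN OK", addr)
c2 = encrypt_block(b"LOGIN OK", addr)
c3 = encrypt_block(b"LOGIN NO", addr)

# An interposer on the memory bus can't read the plaintext, but it can
# see that c1 == c2 (the same value written twice) while c3 differs --
# and it can replay a previously captured ciphertext into that address.
print(c1 == c2, c1 == c3)  # True False
```

A probabilistic scheme would mix in a fresh nonce on every write, hiding these repetitions, but that costs extra memory and bandwidth, which is part of why the chipmakers chose determinism.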

DeepSeek tests “sparse attention” to slash AI processing costs

Chinese lab’s v3.2 release explores a technique that could make running AI far less costly.

Ever wonder why ChatGPT slows down during long conversations? The culprit is a fundamental mathematical challenge: Processing long sequences of text requires massive computational resources, even with the efficiency tricks that companies have already deployed. While US tech giants can afford to throw more hardware at the problem, Chinese AI company DeepSeek, which is cut off from a steady supply of some advanced AI chips by export restrictions, has extra motivation to squeeze more performance from less silicon.

On Monday, DeepSeek released an experimental version of its latest simulated reasoning language model, DeepSeek-V3.2-Exp, which introduces what it calls "DeepSeek Sparse Attention" (DSA). It's the company's implementation of a computational technique likely already used in some of the world's most prominent AI models. OpenAI pioneered sparse transformers in 2019 and used the technique to build GPT-3, while Google Research published work on "Reformer" models using similar concepts in 2020. (The full extent to which Western AI companies currently use sparse attention in their latest models remains undisclosed.)

Despite sparse attention being a known approach for years, DeepSeek claims its version achieves "fine-grained sparse attention for the first time" and has cut API prices by 50 percent to demonstrate the efficiency gains. But to understand more about what makes DeepSeek v3.2 notable, it's useful to refresh yourself on a little AI history.
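The core idea can be illustrated with a minimal sketch. This is not DeepSeek's DSA (whose details differ), just the simplest sparse pattern, a sliding window: instead of every token attending to every earlier token (quadratic cost), each token attends only to a fixed number of recent ones (linear cost):

```python
import numpy as np

def full_attention(Q, K, V):
    # Standard attention: every query scores against every key,
    # so an n-token sequence needs n*n score computations.
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def sliding_window_attention(Q, K, V, window=4):
    # Sparse variant: each query scores only against the `window`
    # most recent keys, so cost grows linearly with sequence length.
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo = max(0, i - window + 1)
        s = Q[i] @ K[lo:i + 1].T / np.sqrt(d)
        w = np.exp(s - s.max())
        w /= w.sum()
        out[i] = w @ V[lo:i + 1]
    return out

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(3, n, d))
dense = full_attention(Q, K, V)
sparse = sliding_window_attention(Q, K, V)
print(dense.shape, sparse.shape)  # same output shape, far fewer scores
```

Here the dense version computes 16 × 16 = 256 attention scores while the windowed version computes at most 4 per row; real systems apply far more sophisticated patterns to decide which tokens are worth attending to, which is where DeepSeek claims its "fine-grained" selection improves on prior work.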

OpenAI: Sora 2 inserts users into AI-generated videos

OpenAI's video generator Sora 2 produces more realistic physics effects and synchronized sound. Users can insert themselves into the videos. (ChatGPT, Film)