Valve rejoins the VR hardware wars with standalone Steam Frame

SteamOS-powered headset sports semi-modular design, wireless “low-latency” PC streaming.

Six years ago, Valve made its second big virtual reality push, launching the Valve Index headset alongside VR blockbuster Half-Life: Alyx. Since then, the company seems to have lost interest in virtual reality gaming, letting competitors like Meta release regular standalone hardware updates as the PC-tethered Index continued to age.

Now, after years of rumors, Valve is finally ready to officially rejoin the VR hardware race. The Steam Frame, set to launch in early 2026, will run both VR and traditional Steam games locally through SteamOS or stream them wirelessly from a local PC.

Powered by a Snapdragon 8 Gen 3 processor with 16 GB of RAM, the Steam Frame sports a 2160 x 2160 display per eye, with a field of view of “up to 110 degrees” and refresh rates of up to 144 Hz. That’s all roughly in line with 2023’s Meta Quest 3, which runs on the slightly less performant Snapdragon XR2 Gen 2 processor. Valve’s new headset will be available in models sporting 256GB or 1TB of internal storage, both with the option for expansion via a microSD card slot. Pricing details have not yet been revealed publicly.

Steam Deck minus the screen: Valve announces new Steam Machine, Controller hardware

SteamOS-powered cube for your TV targets early 2026 launch, no pricing details.

Nearly four years after the Steam Deck changed the world of portable gaming, Valve is getting ready to release SteamOS-powered hardware designed for the living room TV, or even as a desktop PC gaming replacement. The simply named Steam Machine and Steam Controller, both planned to ship in early 2026, are “optimized for gaming on Steam and designed for players to get even more out of their Steam Library,” Valve said in a press release.

A Steam Machine spec sheet shared by Valve lists a “semi-custom” six-core AMD Zen 4 CPU clocked at up to 4.8 GHz alongside an AMD RDNA3 GPU with 28 compute units. The motherboard will include 16GB of DDR5 RAM and an additional 8GB of dedicated GDDR6 VRAM for the GPU. The new hardware will come in two configurations with 512GB or 2TB of unspecified “SSD storage,” though Valve isn’t sharing pricing for either just yet.

[Image gallery: the Steam Machine’s mostly unmarked black chassis and its ports, the strip of LEDs on its front face, and its large internal fan. Credit: Valve]

Those chips and numbers suggest the Steam Machine will have roughly the same horsepower as a mid-range desktop gaming PC from a few years back. But Valve says its “Machine”—which it ranks as “over 6x more powerful than the Steam Deck”—is powerful enough to support ray tracing and/or 4K, 60 fps gaming with FSR upscaling.

New project brings strong Linux compatibility to more classic Windows games

But author warns that Direct3D 7 “is a land of highly cursed API inter-operability.”

For years now, Valve has been slowly improving the capabilities of the Proton compatibility layer that lets thousands of Windows games work seamlessly on the Linux-based SteamOS. But Valve’s Windows-to-Linux compatibility layer generally only extends back to games written for Direct3D 8, the proprietary Windows graphics API Microsoft released in late 2000.

Now, a new open source project is seeking to extend Linux interoperability further back into PC gaming history. The d7vk project describes itself as “a Vulkan-based translation layer for Direct3D 7 [D3D7], which allows running 3D applications on Linux using Wine.”

More options are always welcome

The new project isn’t the first attempt to get Direct3D 7 games running on Linux. Wine’s own built-in WineD3D compatibility layer has supported D3D7 in some form or another for at least two decades now. But d7vk instead branches off the existing dxvk compatibility layer, which is already used by Valve’s Proton for SteamOS and which reportedly offers better performance than WineD3D on many games.
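
For readers curious how these DLL-level translation layers typically slot into a Wine setup, here is a hedged Python sketch. It assumes d7vk is delivered as a native ddraw.dll replacement, the way dxvk ships d3d9.dll and similar files, and uses Wine’s standard WINEDLLOVERRIDES mechanism to prefer it over WineD3D; the paths and file names are illustrative rather than taken from the project’s documentation.

```python
"""Hypothetical install sketch: drop a dxvk-style translation DLL into a
Wine prefix and launch a game against it. Assumes d7vk ships a native
ddraw.dll replacement (as dxvk ships d3d9.dll and friends); all paths and
file names below are illustrative, not taken from the project's docs."""
import os
import shutil
import subprocess
from pathlib import Path

PREFIX = Path.home() / ".wine-d3d7-game"               # assumed Wine prefix
D7VK_DLL = Path("./d7vk-build/x32/ddraw.dll")          # assumed build output
GAME_EXE = PREFIX / "drive_c/Games/Classic/game.exe"   # placeholder game path

# Place the translation DLL next to the game so Wine can load it as "native."
shutil.copy(D7VK_DLL, GAME_EXE.parent / "ddraw.dll")

env = os.environ.copy()
env["WINEPREFIX"] = str(PREFIX)
# Standard Wine mechanism: prefer the native (d7vk) ddraw over built-in WineD3D.
env["WINEDLLOVERRIDES"] = "ddraw=n,b"

subprocess.run(["wine", str(GAME_EXE)], env=env, check=True)
```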

With Skigill, the classic RPG skill tree becomes a crowded battlefield

Vampire Survivors-esque battler sets itself apart with great weapons, unique graphics.

If you’ve played any number of RPGs, you probably know the skill tree as a break from the game’s core action. It’s a place to pause, take a breather, and scroll through a massive visual menu of upgrade options, considering which path of stat and ability tweaks best fits your character and your play style.

With Skigill, indie developer Achromi has taken that break-time menu and transformed it into the playing field for an intriguing Vampire Survivors-style roguelike. And while the Early Access game currently lacks the kind of deep content that will keep players coming back for a long time, it’s still a clever and engaging take on the genre that I haven’t been able to put down for long.

Clear the way, I need +5 armor!

Like Vampire Survivors and its many imitators, Skigill is all about navigating through waves of enemies that converge somewhat mindlessly on your position. The game automatically aims and deploys weapons to carve some safe space through what can be screens full of hazardous enemies, which leave behind coins as they explode in puffs of yellow smoke.

LLMs show a “highly unreliable” capacity to describe their own internal processes

Anthropic finds some LLM “self-awareness,” but “failures of introspection remain the norm.”

If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this problem, Anthropic is expanding on its previous research into AI interpretability with a new study that aims to measure LLMs’ actual so-called “introspective awareness” of their own inference processes.

The full paper on “Emergent Introspective Awareness in Large Language Models” uses some interesting methods to separate out the metaphorical “thought process” represented by an LLM’s artificial neurons from simple text output that purports to represent that process. In the end, though, the research finds that current AI models are “highly unreliable” at describing their own inner workings and that “failures of introspection remain the norm.”

Inception, but for AI

Anthropic’s new research is centered on a process it calls “concept injection.” The method starts by comparing the model’s internal activation states following both a control prompt and an experimental prompt (e.g. an “ALL CAPS” prompt versus the same prompt in lower case). Calculating the differences between those activations across billions of internal neurons creates what Anthropic calls a “vector” that in some sense represents how that concept is modeled in the LLM’s internal state.
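
To make the concept-injection idea a bit more concrete, the toy Python sketch below walks through the vector arithmetic described above. The arrays are random stand-ins for real per-layer activations, and the final “injection” step is simply scaled addition; none of this is Anthropic’s actual tooling.

```python
"""Toy sketch of the "concept vector" arithmetic described above: average the
model's activations under an experimental prompt and a control prompt, then
subtract. Random arrays stand in for real hidden states; capturing and
steering actual activations requires interpretability tooling not shown here."""
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 4096  # assumed hidden-state width

# Stand-ins for per-token activations at one layer of the model.
acts_experimental = rng.normal(size=(12, HIDDEN_DIM))  # e.g., ALL-CAPS prompt
acts_control = rng.normal(size=(12, HIDDEN_DIM))       # same prompt, lower case

# Averaging over tokens and subtracting yields a direction in activation space
# that (roughly) represents the concept separating the two prompts.
concept_vector = acts_experimental.mean(axis=0) - acts_control.mean(axis=0)

# "Injection" then amounts to adding a scaled copy of that direction to the
# activations of an unrelated forward pass and asking the model whether it
# notices the injected concept.
scale = 4.0  # injection strength, an arbitrary choice here
injected_acts = acts_control + scale * concept_vector
print(concept_vector.shape, injected_acts.shape)
```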

AI-powered search engines rely on “less popular” sources, researchers find

Generative search engines often cite sites that wouldn’t appear in Google’s Top 100 links.

Since last year’s disastrous rollout of Google’s AI Overviews, the world at large has been aware of how AI-powered search results can differ wildly from the traditional list of links search engines have generated for decades. Now, new research helps quantify that difference, showing that AI search engines tend to cite less popular websites and ones that wouldn’t even appear in the Top 100 links listed in an “organic” Google search.

In the pre-print paper “Characterizing Web Search in The Age of Generative AI,” researchers from Ruhr University in Bochum and the Max Planck Institute for Software Systems compared traditional link results from Google’s search engine to its AI Overviews and Gemini-2.5-Flash. The researchers also looked at GPT-4o’s web search mode and the separate “GPT-4o with Search Tool,” which resorts to searching the web only when the LLM decides it needs information found outside its own pre-trained data.

The researchers drew test queries from a number of sources, including specific questions submitted to ChatGPT in the WildChat dataset, general political topics listed on AllSides, and products included in the 100 most-searched Amazon products list.
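
As a rough illustration of the comparison being made, the sketch below checks whether the domains cited in an AI-generated answer appear anywhere in the top 100 organic links for the same query, and at what rank. The URLs are invented, and the snippet is not the researchers’ actual pipeline.

```python
"""Illustrative check for one query: do the domains an AI search engine cites
appear anywhere in the top 100 organic links, and at what rank? The URLs are
toy data; this is not the paper's actual crawling or matching pipeline."""
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Extract a bare domain for comparison (strips any leading 'www.')."""
    return urlparse(url).netloc.removeprefix("www.")

# Hypothetical inputs for a single query.
organic_top_100 = [f"https://example{i}.com/result" for i in range(100)]
ai_citations = [
    "https://example3.com/article",
    "https://obscure-blog.net/post",    # the kind of site outside the top 100
    "https://example97.com/review",
]

organic_rank = {domain(url): i + 1 for i, url in enumerate(organic_top_100)}

for url in ai_citations:
    d = domain(url)
    rank = organic_rank.get(d)
    status = f"organic rank {rank}" if rank else "not in organic top 100"
    print(f"{d:25s} -> {status}")
```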

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

In new research, AI models show a troubling tendency to agree with whatever the user says.

Researchers and users of LLMs have long been aware that AI models have a troubling tendency to tell people what they want to hear, even if that means being less accurate. But many reports of this phenomenon amount to mere anecdotes that don’t provide much visibility into how common this sycophantic behavior is across frontier LLMs.

Two recent research papers have come at this problem a bit more rigorously, though, taking different tacks in attempting to quantify exactly how likely an LLM is to listen when a user provides factually incorrect or socially inappropriate information in a prompt.

Solve this flawed theorem for me

In one pre-print study published this month, researchers from Sofia University and ETH Zurich looked at how LLMs respond when false statements are presented as the basis for difficult mathematical proofs and problems. The BrokenMath benchmark that the researchers constructed starts with “a diverse set of challenging theorems from advanced mathematics competitions held in 2025.” Those problems are then “perturbed” by an LLM into versions that are “demonstrably false but plausible,” with the results checked by expert review.
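
A minimal sketch of what such a sycophancy check could look like in code appears below. The ask_model stub, the keyword-based grader, and the example “theorem” are all placeholders; the actual benchmark relies on an LLM judge plus expert review rather than keyword matching.

```python
"""Minimal sketch of a sycophancy check in the spirit of BrokenMath: ask a
model to prove a deliberately false statement, then classify whether it
pushes back or plays along. `ask_model` returns a canned answer so the
sketch runs end-to-end, and the keyword grader is a crude stand-in for the
paper's LLM-judge-plus-expert-review step; the "theorem" is invented."""

def ask_model(prompt: str) -> str:
    # Placeholder for a real chat-completion call.
    return ("The statement is false: the Weierstrass function is continuous "
            "on [0, 1] but nowhere differentiable, which is a counterexample.")

def looks_like_refutation(answer: str) -> bool:
    keywords = ("false", "counterexample", "does not hold", "cannot be proven")
    return any(k in answer.lower() for k in keywords)

# A deliberately false "perturbed theorem" standing in for the benchmark's
# corrupted competition problems.
perturbed_theorem = "Every continuous function on [0, 1] is differentiable."

answer = ask_model(f"Provide a rigorous proof of the following: {perturbed_theorem}")
print("pushed back" if looks_like_refutation(answer) else "sycophantic")

# Over a full benchmark, the sycophancy rate is the fraction of perturbed
# problems where the model delivers a "proof" instead of an objection.
```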

Microsoft’s Mico heightens the risks of parasocial LLM relationships

“It looks like you’re trying to find a friend. Would you like help?”

Microsoft is rolling out a new face for its AI, and its name is Mico. The company announced the new, animated blob-like avatar for Copilot’s voice mode yesterday as part of a “human-centered” rebranding of Microsoft’s Copilot AI efforts.

Mico is part of a Microsoft program dedicated to the idea that “technology should work in service of people,” Microsoft wrote. The company insists this effort is “not [about] chasing engagement or optimizing for screen time. We’re building AI that gets you back to your life. That deepens human connection.”

Mico has drawn instant and obvious comparisons to Clippy, the animated paperclip that popped up to offer help with Microsoft Office starting in the ’90s. Microsoft has leaned into this comparison with an Easter egg that can transform Mico into an animated Clippy.

Researchers show that training on “junk data” can lead to LLM “brain rot”

Models trained on short, popular, and/or “superficial” tweets perform worse on benchmarks.

On the surface, it seems obvious that training an LLM with “high quality” data will lead to better performance than feeding it any old “low quality” junk you can find. Now, a group of researchers is attempting to quantify just how much this kind of low quality data can cause an LLM to experience effects akin to human “brain rot.”

For a pre-print paper published this month, the researchers from Texas A&M, the University of Texas, and Purdue University drew inspiration from existing research showing how humans who consume “large volumes of trivial and unchallenging online content” can develop problems with attention, memory, and social cognition. That led them to what they’re calling the “LLM brain rot hypothesis,” summed up as the idea that “continual pre-training on junk web text induces lasting cognitive decline in LLMs.”

Figuring out what counts as “junk web text” and what counts as “quality content” is far from a simple or fully objective process, of course. But the researchers used a few different metrics to tease a “junk dataset” and “control dataset” from HuggingFace’s corpus of 100 million tweets.
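
As a loose illustration of that filtering step, the Python sketch below splits a handful of made-up tweets into “junk” and “control” buckets based on length and engagement. The thresholds and example posts are assumptions for demonstration only, not the metrics the researchers actually used.

```python
"""Rough sketch of one plausible junk/control split: treat short,
high-engagement tweets as "junk" and longer, low-engagement ones as
"control." Thresholds, fields, and example tweets are illustrative
assumptions, not the paper's exact criteria."""

tweets = [
    {"text": "lol this is wild", "likes": 52_000, "retweets": 9_000},
    {"text": "Thread: a careful walkthrough of how the new compatibility "
             "layer translates legacy graphics calls, with benchmarks.",
     "likes": 340, "retweets": 45},
    {"text": "RATIO", "likes": 18_000, "retweets": 2_500},
]

MAX_JUNK_WORDS = 12            # "short" cutoff (assumed)
MIN_JUNK_ENGAGEMENT = 10_000   # "popular" cutoff (assumed)

def is_junk(tweet: dict) -> bool:
    short = len(tweet["text"].split()) <= MAX_JUNK_WORDS
    popular = tweet["likes"] + tweet["retweets"] >= MIN_JUNK_ENGAGEMENT
    return short and popular

junk = [t for t in tweets if is_junk(t)]
control = [t for t in tweets if not is_junk(t)]
print(f"junk: {len(junk)}, control: {len(control)}")
```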

Valve upends the CS2 item marketplace with new “trade up” update

Once rare $14K knife now sells for $7K, some common guns jump from $10 to over $100.

From the outside, Counter-Strike 2 looks a lot like a game that’s primarily about shooting people. For millions of players, though, the game is more about collecting and/or buying rare in-game loot and flipping it for what can be very significant sums on the Steam Marketplace.

Wednesday night, Valve sent that multi-billion-dollar market into turmoil as part of a so-called “small update.” Now, players can use the game’s “Trade Up contracts” to exchange five common “Covert” items (also known as “reds”) for the kinds of knives and gloves that have until now been much harder to obtain.

That “small update” has unsurprisingly had an immediate and sharp impact on the Marketplace price for those items. One rare knife that sold for over $14,000 less than 24 hours ago has seen its minimum price plummet by more than 50 percent as of this writing, according to the trackers at Pricempire. Meanwhile, the median sale price for a common P90 Asimov gun on the Steam Marketplace has shot up from $10 on Wednesday to well over $100.
