Gemini Deep Research comes to Google Finance, backed by prediction market data

Deep Research and predictions based on Kalshi and Polymarket data are coming soon to Google Finance.

Google has announced new features for the popular Google Finance platform, and the update leans heavily on Google’s tried-and-true strategy of more AI in more places. It builds on Google’s last Finance update, which added a Gemini-based chatbot. Now, Google is adding Gemini Deep Research to the site, which will allow users to ask much more complex questions. You can also ask questions about the future, backed by new betting market data sources.

The update, which is rolling out over the next several weeks, will add a Deep Research option to the Finance chatbot. The company claims that with the more powerful AI, users will be able to generate “fully cited” research reports on a given topic in just a few minutes. So you can expect an experience similar to Deep Research in the Gemini app—you give it a prompt, and then you come back later to see the result.

You probably won’t want to bother with Deep Research for simple queries—there are faster, easier ways to get those answered. Google suggests using Deep Research for more complex questions.

Bombshell report exposes how Meta relied on scam ad profits to fund AI

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it would earn billions by ignoring scam ads that its platforms then targeted to the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Google plans secret AI military outpost on tiny island overrun by crabs

Christmas Island facility would support naval surveillance in strategic Indo-Pacific waters.

On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia at a location military strategists consider critical for monitoring Chinese naval activity.

Aside from its strategic military position, the island is famous for its massive annual crab migration, in which more than 100 million red crabs make their way across the island to spawn in the ocean. That’s notable because the tech giant has applied for environmental approvals to build a subsea cable connecting the island to Darwin, where US Marines are stationed for six months each year.

The project follows a three-year cloud agreement Google signed with Australia’s military in July 2025, but many details about the new facility’s size, cost, and specific capabilities remain “secret,” according to Reuters. Both Google and Australia’s Department of Defense declined to comment when contacted by the news agency.

5 AI-developed malware families analyzed by Google fail to work and are easily detected

You wouldn’t know it from the hype, but the results fail to impress.

Google on Wednesday revealed five recent malware samples that were built using generative AI. The end results were all far below the bar set by professional malware development, a finding that shows vibe coding of malicious wares still lags behind more traditional development and has a long way to go before it poses a real-world threat.

One of the samples, for instance, tracked under the name PromptLock, was part of an academic study analyzing how effective the use of large language models can be “to autonomously plan, adapt, and execute the ransomware attack lifecycle.” The researchers, however, reported the malware had “clear limitations: it omits persistence, lateral movement, and advanced evasion tactics” and served as little more than a demonstration of the feasibility of AI for such purposes. Prior to the paper’s release, security firm ESET said it had discovered the sample and hailed it as “the first AI-powered ransomware.”

Don’t believe the hype

Like the other four samples Google analyzed—FruitShell, PromptFlux, PromptSteal, and QuietVault—PromptLock was easy to detect, even by less-sophisticated endpoint protections that rely on static signatures. All of the samples also employed methods previously seen in other malware, making them easy to counteract. None had any operational impact, meaning defenders didn’t need to adopt new defenses.

If you want to satiate AI’s hunger for power, Google suggests going to space

Google engineers think they already have all the pieces needed to build a data center in orbit.

It was probably always a matter of when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.

Google announced Tuesday a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.

“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.

So long, Assistant—Gemini is taking over Google Maps

Gemini is rolling out to Maps on Android and iOS, with Android Auto coming soon.

Google is in the process of purging Assistant across its products, and the next target is Google Maps. Starting today, Gemini will begin rolling out in Maps, powering new experiences for navigation, location info, and more. Gemini will eventually take over Google Assistant’s hands-free role in Maps entirely, but the rollout will take time. So for now, the smart assistant you get in Google Maps will still depend on how you’re running the app.

Across all of Gemini’s incarnations, Google stresses its conversational abilities. Whereas Assistant was hard-pressed to keep one or two balls in the air, you can theoretically give Gemini much more complex instructions. In Google’s demo, someone asks for nearby restaurants with cheap vegan food, and instead of just providing a list, Gemini makes a suggestion based on the user’s follow-up input. Gemini can also offer more information about a chosen location.

Maps will also get its own Gemini-infused version of Lens for after you park. You will be able to point the camera at a landmark, restaurant, or other business to get instant answers to your questions. This experience will be distinct from the version of Lens available in the Google app, focused on giving you location-based information. Maybe you want to know about the menu at a restaurant or what it’s like inside. Sure, you could open the door… but AI!

Google’s new hurricane model was breathtakingly good this season

Meanwhile, the US Global Forecast System continues to get worse.

The Atlantic hurricane season is drawing to a close, and with the tropics quieting down for a winter slumber, the focus of forecasters turns to evaluating what worked and what did not during the preceding season.

This year, the answers are clear. Although Google DeepMind’s Weather Lab only started releasing cyclone track forecasts in June, the company’s AI forecasting service performed exceptionally well. By contrast, the Global Forecast System model, which is operated by the US National Weather Service, based on traditional physics, and run on powerful supercomputers, performed abysmally.

The official data comparing forecast model performance will not be published by the National Hurricane Center for a few months. However, Brian McNoldy, a senior researcher at the University of Miami, has already done some preliminary number crunching.

Meet Project Suncatcher, Google’s plan to put AI data centers in space

Google is already zapping TPUs with radiation to get ready.

The tech industry is on a tear, building data centers for AI as quickly as it can buy up the land. The sky-high energy costs and logistical headaches of managing all those data centers have prompted interest in space-based infrastructure. Moguls like Jeff Bezos and Elon Musk have mused about putting GPUs in space, and now Google has confirmed it’s working on its own version of the technology. The company’s latest “moonshot” is known as Project Suncatcher, and if all goes as planned, Google hopes it will lead to scalable networks of orbiting TPUs.

The space around Earth has changed a lot in the last few years. A new generation of satellite constellations like Starlink has shown it’s feasible to relay Internet communication via orbital systems. Deploying high-performance AI accelerators in space along similar lines would be a boon to the industry’s never-ending build-out. Google notes that space may be “the best place to scale AI compute.”

Google’s vision for scalable orbiting data centers relies on solar-powered satellites with free-space optical links connecting the nodes into a distributed network. Naturally, there are numerous engineering challenges to solve before Project Suncatcher is real. As a reference, Google points to the long road from its first moonshot self-driving cars 15 years ago to the Waymo vehicles that are almost fully autonomous today.

LLMs show a “highly unreliable” capacity to describe their own internal processes

Anthropic finds some LLM “self-awareness,” but “failures of introspection remain the norm.”

If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this problem, Anthropic is expanding on its previous research into AI interpretability with a new study that aims to measure LLMs’ actual so-called “introspective awareness” of their own inference processes.

The full paper on “Emergent Introspective Awareness in Large Language Models” uses some interesting methods to separate out the metaphorical “thought process” represented by an LLM’s artificial neurons from simple text output that purports to represent that process. In the end, though, the research finds that current AI models are “highly unreliable” at describing their own inner workings and that “failures of introspection remain the norm.”

Inception, but for AI

Anthropic’s new research is centered on a process it calls “concept injection.” The method starts by comparing the model’s internal activation states following both a control prompt and an experimental prompt (e.g. an “ALL CAPS” prompt versus the same prompt in lower case). Calculating the differences between those activations across billions of internal neurons creates what Anthropic calls a “vector” that in some sense represents how that concept is modeled in the LLM’s internal state.
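Anthropic’s actual procedure operates on a transformer’s internal activations, but the core arithmetic is simple enough to sketch. Below is a minimal, hypothetical illustration—the function names, array shapes, and random stand-in activations are all assumptions, not Anthropic’s code. The “concept vector” is the difference between (mean) activations under the experimental and control prompts, and “injection” adds a scaled copy of that vector back into a hidden state during a later forward pass:

```python
import numpy as np

def concept_vector(acts_experimental, acts_control):
    # Difference of mean activations between an experimental prompt
    # (e.g. ALL CAPS) and a control prompt (lowercase) approximates
    # a direction representing the concept in activation space.
    return acts_experimental.mean(axis=0) - acts_control.mean(axis=0)

def inject(hidden_state, vector, strength=4.0):
    # "Concept injection": add the scaled concept vector to a layer's
    # hidden state, steering the model toward that concept.
    return hidden_state + strength * vector

# Toy example with random stand-ins for real model activations
# (8 token positions, 512-dimensional hidden states).
rng = np.random.default_rng(0)
caps_acts = rng.normal(size=(8, 512))   # activations for ALL CAPS prompt
lower_acts = rng.normal(size=(8, 512))  # activations for lowercase prompt

v = concept_vector(caps_acts, lower_acts)
steered = inject(rng.normal(size=512), v)
```

In the real study, the test is then whether the model notices and correctly names the injected concept when asked about its own internal state—something the paper reports it fails to do most of the time.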

Google removes Gemma models from AI Studio after GOP senator’s complaint

Sen. Marsha Blackburn says Gemma concocted sexual misconduct allegations against her.

You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.

At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.
