SOM: Skyscrapers to become energy storage

Electricity from wind and solar power plants has to be stored somewhere. The US architecture firm SOM wants, among other things, to turn buildings into storage. (energy transition, solar energy)

iOS: Apple names official update period for the iPhone 15

Apple has long supported its iPhones for several years; according to a document, the iPhone 15 Pro Max will receive new software for at least five years. (iPhone 15, Apple)

Can a technology called RAG keep AI models from making stuff up?

The framework pulls in external sources to enhance accuracy. Does it live up to the hype?

(credit: Aurich Lawson | Getty Images)

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a creative gap-filling technique AI language models use when they encounter holes in their knowledge that weren’t present in their training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.
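
The retrieval step is the heart of the approach: before the model answers, relevant passages are fetched from an outside source and pasted into the prompt so the model has concrete text to draw on rather than gaps to fill. Below is a minimal, hypothetical Python sketch of that flow; the keyword-overlap retriever and the sample corpus are illustrative assumptions, not part of the article or of any particular RAG library.

```python
# A minimal, hypothetical sketch of the RAG pattern discussed above: retrieve
# passages from an external corpus, then prepend them to the prompt so the
# model answers from supplied text instead of guessing. The retriever and
# corpus are toy stand-ins, not any vendor's actual implementation.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model by placing the retrieved passages ahead of the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        "Refunds are available within 30 days of purchase.",
        "Support is open weekdays from 9am to 5pm.",
        "The loyalty program awards one point per dollar spent.",
    ]
    query = "What is the refund policy?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)  # this grounded prompt would then be sent to an LLM
```

In a real system the toy retriever would typically be replaced by a vector search over embedded documents, but the grounding idea is the same: the model is asked to answer from retrieved evidence rather than from whatever its training data happened to suggest.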
