If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this problem, Anthropic is expanding on its previous research into AI interpretability with a new study that aims to measure LLMs’ actual so-called “introspective awareness” of their own inference processes.
The full paper on “Emergent Introspective Awareness in Large Language Models” uses some interesting methods to separate out the metaphorical “thought process” represented by an LLM’s artificial neurons from simple text output that purports to represent that process. In the end, though, the research finds that current AI models are “highly unreliable” at describing their own inner workings and that “failures of introspection remain the norm.”
Inception, but for AI
Anthropic’s new research is centered on a process it calls “concept injection.” The method starts by comparing the model’s internal activation states following both a control prompt and an experimental prompt (e.g., an “ALL CAPS” prompt versus the same prompt in lowercase). Calculating the differences between those activations across billions of internal neurons creates what Anthropic calls a “vector” that in some sense represents how that concept is modeled in the LLM’s internal state.
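To make the idea concrete, here is a minimal sketch, not Anthropic’s actual tooling, of how such a concept vector might be derived and then injected, assuming you can capture per-token activations at a chosen layer as NumPy arrays. The function names, the token-averaging step, and the `strength` scale are illustrative assumptions.

```python
import numpy as np

def concept_vector(control_acts: np.ndarray, experimental_acts: np.ndarray) -> np.ndarray:
    """
    Build a rough "concept vector" as the difference between a model's internal
    activations on an experimental prompt (e.g., ALL CAPS text) and a control
    prompt (the same text in lowercase).

    Both inputs are assumed to be (num_tokens, hidden_dim) activation matrices
    captured at a single layer; averaging over tokens gives one vector per prompt.
    """
    control_mean = control_acts.mean(axis=0)            # (hidden_dim,)
    experimental_mean = experimental_acts.mean(axis=0)  # (hidden_dim,)
    return experimental_mean - control_mean             # direction encoding the concept

def inject_concept(layer_acts: np.ndarray, vector: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """
    Hypothetical injection step: add the scaled concept vector to the activations
    at the chosen layer during a later, unrelated forward pass, nudging the model's
    internal state toward the concept without mentioning it in the prompt.
    """
    return layer_acts + strength * vector
```

In practice the vector would be computed once from the contrasting prompt pair and then added into the model’s residual-stream activations at inference time, which is where the “injection” in concept injection comes from.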