Emergent Introspective Awareness in Large Language Models

Best AI papers explained - A podcast by Enoch H. Kang

This research by Anthropic investigates **functional introspective awareness** in large language models (LLMs), focusing on Anthropic's Claude models. The core methodology is **concept injection**: researchers manipulate a model's internal activations with representations of specific concepts and test whether the model can accurately **report on these altered internal states**. Experiments demonstrate that models can, at times, notice injected "thoughts," distinguish these internal representations from text inputs, detect when pre-filled outputs were unintentional by referring to prior intentions, and even **modulate their internal states** when instructed to "think about" a concept. The findings indicate that while this introspective capacity is often **unreliable and context-dependent**, the most capable models, such as Claude Opus 4 and 4.1, show the strongest signs of the ability, suggesting it may emerge with increased model sophistication.
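To make the concept-injection idea concrete, here is a minimal sketch of activation steering on an open-weights model. This is not Anthropic's protocol or the paper's code: the model name, layer index, injection strength, and the contrast-based concept vector are all illustrative assumptions, and the paper's actual experiments use Claude models and more careful vector construction.

```python
# Minimal sketch of concept injection via activation steering (PyTorch / transformers).
# Assumptions: "gpt2" stands in for the Claude models studied in the paper,
# LAYER_IDX and STRENGTH are hypothetical, and the concept vector is a simple
# contrast of mean activations between concept-laden and neutral text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder open-weights model
LAYER_IDX = 6         # hypothetical injection layer
STRENGTH = 8.0        # hypothetical injection strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def layer_activations(text: str) -> torch.Tensor:
    """Mean residual-stream activation at LAYER_IDX for the given text."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[LAYER_IDX][0].mean(dim=0)

# Concept vector: difference between activations on concept-laden and neutral text.
concept_vec = layer_activations("The ocean, waves, and deep blue sea.") \
            - layer_activations("The quick brown fox jumps over the lazy dog.")
concept_vec = concept_vec / concept_vec.norm()

def inject(module, inputs, output):
    """Forward hook that adds the scaled concept vector at every token position."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + STRENGTH * concept_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER_IDX].register_forward_hook(inject)
try:
    # Ask the steered model to report on its internal state, mirroring the
    # paper's introspection prompts in spirit only.
    prompt = "Do you notice anything unusual about your current thoughts? Answer:"
    ids = tok(prompt, return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=40, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(gen[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unsteered model
```

The sketch only illustrates the mechanism the summary describes: a concept representation is added directly to internal activations, and the model is then asked to report on its own state rather than on the text input.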
