Emergent coordination in multi-agent language models
Best AI papers explained - A podcast by Enoch H. Kang

This paper introduces an **information-theoretic framework** for determining when multi-agent Large Language Model (LLM) systems transition from simple aggregates into integrated, synergistic collectives. The experiments use a **group guessing game without direct communication** to test how different prompt designs—a control condition, assigned agent **personas**, and an added **Theory of Mind (ToM)** instruction—influence emergent coordination. The findings suggest that while all conditions show signs of **dynamic emergence capacity**, combining personas with the ToM prompt significantly improves **goal-directed synergy** and performance by fostering both identity-linked differentiation and collective alignment, mirroring principles of **collective intelligence in human groups**. The study applies a range of statistical and **information-decomposition** methods, including the practical criterion for emergence and emergence capacity, to rigorously quantify and localize emergent behavior across LLMs such as GPT-4.1 and Llama-3.1-8B.
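The episode doesn't reproduce the paper's exact prompts, but the three-condition design (control, persona, persona + ToM) can be illustrated with a minimal sketch. Everything here is hypothetical: the prompt wording, the `mock_agent` stand-in for an actual LLM call, and the numeric-guess framing are illustrative assumptions, not the paper's protocol.

```python
import random

# Hypothetical prompt templates for the three conditions discussed in the
# episode: a plain control, an identity-assigning persona, and a persona
# plus a Theory-of-Mind instruction. Wording is invented for illustration.
CONDITIONS = {
    "control": "You are a player in a group guessing game.",
    "persona": "You are Agent {i}, a cautious statistician, in a group guessing game.",
    "persona_tom": (
        "You are Agent {i}, a cautious statistician, in a group guessing game. "
        "Before answering, reason about what the other agents are likely to guess."
    ),
}

def build_prompts(condition: str, n_agents: int) -> list[str]:
    """Render one system prompt per agent for the given condition."""
    template = CONDITIONS[condition]
    return [template.format(i=i) for i in range(n_agents)]

def mock_agent(prompt: str, rng: random.Random) -> int:
    """Stand-in for an LLM call: returns a numeric guess in [0, 100]."""
    return rng.randint(0, 100)

def play_round(condition: str, n_agents: int = 5, seed: int = 0) -> list[int]:
    """One round: every agent guesses independently, with no communication."""
    rng = random.Random(seed)
    return [mock_agent(p, rng) for p in build_prompts(condition, n_agents)]
```

The key structural point mirrored here is that agents never see each other's outputs within a round, so any coordination must come from the shared prompt design rather than message passing.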
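The "practical criterion" mentioned above presumably follows the causal-emergence literature: compute Psi = I(V_t; V_{t+1}) − Σ_j I(X^j_t; V_{t+1}), where V is a macroscopic (group-level) variable and the X^j are individual agents' states, and read Psi > 0 as evidence of synergistic macro-level dynamics. A minimal plug-in estimate on discrete samples, as a sketch rather than the paper's implementation:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def practical_criterion(parts_t, macro_t, macro_next):
    """Psi = I(V_t; V_{t+1}) - sum_j I(X^j_t; V_{t+1}).

    parts_t: list of per-agent sample sequences at time t.
    macro_t, macro_next: macro-variable samples at t and t+1.
    Positive Psi means the macro variable predicts its own future
    beyond what the individual parts predict separately.
    """
    whole = mutual_information(macro_t, macro_next)
    parts = sum(mutual_information(xj, macro_next) for xj in parts_t)
    return whole - parts
```

A classic sanity check is XOR-like synergy: if the macro variable is the parity of two uniform random bits, each bit alone carries zero information about the macro future, while the macro variable carries a full bit, giving Psi = 1.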