Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems

Large language model-based multi-agent systems have recently gained significant attention for their potential to solve complex problems through collaboration. Existing surveys typically categorize LLM-based multi-agent systems (LLM-MAS) by application domain or architecture, overlooking the central role of communication in coordinating agent behaviors and interactions. To address this gap, this paper presents a comprehensive survey of LLM-MAS from a communication-centric perspective. Specifically, we propose a structured framework that integrates system-level communication (architecture, goals, and protocols) with system-internal communication (strategies, paradigms, objects, and content), enabling a detailed exploration of how agents interact, negotiate, and achieve collective intelligence. Through an extensive analysis of recent literature, we identify key components along multiple dimensions and summarize their strengths and limitations. In addition, we highlight current challenges, including communication efficiency, security vulnerabilities, inadequate benchmarking, and scalability issues, and outline promising directions for future research. This review aims to give researchers and practitioners a clear understanding of the communication mechanisms in LLM-MAS, thereby facilitating the design and deployment of robust, scalable, and secure multi-agent systems.
@article{yan2025_2502.14321,
  title={Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems},
  author={Bingyu Yan and Zhibo Zhou and Litian Zhang and Lian Zhang and Ziyi Zhou and Dezhuang Miao and Zhoujun Li and Chaozhuo Li and Xiaoming Zhang},
  journal={arXiv preprint arXiv:2502.14321},
  year={2025}
}