MAS: Self-Generative, Self-Configuring, Self-Rectifying Multi-Agent Systems

The past two years have witnessed the meteoric rise of Large Language Model (LLM)-powered multi-agent systems (MAS), which harness collective intelligence and exhibit a remarkable trajectory toward self-evolution. This paradigm has rapidly progressed from manually engineered systems, which require bespoke configuration of prompts, tools, roles, and communication protocols, toward frameworks capable of automated orchestration. Yet dominant automatic multi-agent systems, whether generated by external modules or a single LLM agent, largely adhere to a rigid "generate-once-and-deploy" paradigm, rendering the resulting systems brittle and ill-prepared for the dynamism and uncertainty of real-world environments. To transcend this limitation, we introduce MAS, a paradigm predicated on the principle of recursive self-generation: a multi-agent system that autonomously architects bespoke multi-agent systems for diverse problems. Technically, we devise a "generator-implementer-rectifier" tri-agent team capable of dynamically composing and adaptively rectifying a target agent system in response to real-time task demands. Collaborative Tree Optimization is proposed to train and specialize these meta-agents. Extensive evaluation across seven benchmarks reveals that MAS achieves performance gains over state-of-the-art MAS in complex scenarios such as deep research and code generation. Moreover, MAS exhibits superior cross-backbone generalization, effectively leveraging previously unseen LLMs to yield further improvements. Crucially, these gains are attained without incurring excessive token costs, as MAS consistently resides on the Pareto frontier of cost-performance trade-offs. The source code is available at this https URL.
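
The abstract does not give implementation details, but the control flow it describes (a generator composing a target agent system, an implementer executing it, and a rectifier feeding corrections back until the result is acceptable) can be sketched roughly as follows. This is a minimal illustrative sketch: the class names, the call_llm stub, the solve() loop, and the acceptance heuristic are all assumptions for exposition, not the paper's actual interfaces.

```python
from dataclasses import dataclass


@dataclass
class AgentSystemSpec:
    """A candidate target system: roles, tools, and a communication plan."""
    roles: list[str]
    tools: list[str]
    plan: str


def call_llm(prompt: str) -> str:
    """Stub standing in for a backbone-LLM call; replace with a real client."""
    return "accept: stubbed response for " + prompt[:40]


class Generator:
    """Meta-agent that composes a bespoke agent system for the task."""
    def propose(self, task: str, feedback: str = "") -> AgentSystemSpec:
        draft = call_llm(f"Design an agent team for: {task}\nPrior feedback: {feedback}")
        return AgentSystemSpec(roles=["planner", "coder"], tools=["search"], plan=draft)


class Implementer:
    """Meta-agent that instantiates the spec and runs it on the task."""
    def run(self, spec: AgentSystemSpec, task: str) -> str:
        return call_llm(f"Execute plan '{spec.plan}' with roles {spec.roles} on: {task}")


class Rectifier:
    """Meta-agent that judges the output and emits corrective feedback."""
    def critique(self, task: str, output: str) -> tuple[bool, str]:
        verdict = call_llm(f"Task: {task}\nOutput: {output}\nAccept, or list fixes.")
        return verdict.lower().startswith("accept"), verdict


def solve(task: str, max_rounds: int = 3) -> str:
    """Iterate compose -> execute -> rectify until accepted or rounds exhausted."""
    generator, implementer, rectifier = Generator(), Implementer(), Rectifier()
    feedback, output = "", ""
    for _ in range(max_rounds):
        spec = generator.propose(task, feedback)                # compose a target system
        output = implementer.run(spec, task)                    # deploy and execute it
        accepted, feedback = rectifier.critique(task, output)   # accept or request fixes
        if accepted:
            break
    return output


if __name__ == "__main__":
    print(solve("Summarize three recent papers on multi-agent LLM systems"))
```

The sketch only captures the feedback loop between the three meta-agents; how the paper trains and specializes them (via Collaborative Tree Optimization) is not reflected here.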