
Multi-agent Architecture Search via Agentic Supernet

Abstract

Large Language Model (LLM)-empowered multi-agent systems extend the cognitive boundaries of individual agents through disciplined collaboration and interaction, yet constructing these systems often requires labor-intensive manual design. Although methods exist to automate the design of agentic workflows, they typically seek a static, complex, one-size-fits-all system, which fails to dynamically allocate inference resources based on the difficulty and domain of each query. To address this challenge, we shift away from the pursuit of a monolithic agentic system, instead optimizing the \textbf{agentic supernet}, a probabilistic and continuous distribution of agentic architectures. We introduce MaAS, an automated framework that samples query-dependent agentic systems from the supernet, delivering high-quality solutions and tailored resource allocation (\textit{e.g.}, LLM calls, tool calls, token cost). Comprehensive evaluation across six benchmarks demonstrates that MaAS \textbf{(I)} requires only $6\sim45\%$ of the inference costs of existing handcrafted or automated multi-agent systems, \textbf{(II)} surpasses them by $0.54\%\sim11.82\%$, and \textbf{(III)} enjoys superior cross-dataset and cross-LLM-backbone transferability.
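To make the supernet idea concrete, below is a minimal Python sketch of query-dependent architecture sampling: each layer holds a categorical distribution over candidate agentic operators, and an architecture is drawn per query, with an exit operator up-weighted for easy queries so that cheaper queries receive shallower workflows. This is an illustration under assumptions, not the paper's implementation; the AgenticSupernet class, the operator pool, the scalar difficulty input, and the early-exit rule are all hypothetical.

import math
import random

# Hypothetical operator pool; the paper's actual operator set may differ.
OPERATORS = ["CoT", "ReAct", "Debate", "Reflexion", "ToolUse", "EarlyExit"]

class AgenticSupernet:
    """Toy agentic supernet: a per-layer categorical distribution over
    candidate agentic operators, from which architectures are sampled."""

    def __init__(self, num_layers: int, num_ops: int):
        # One logit per (layer, operator); learned in the real system,
        # initialized uniform here for illustration.
        self.logits = [[0.0] * num_ops for _ in range(num_layers)]

    @staticmethod
    def _softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self, query_difficulty: float):
        """Sample a query-dependent architecture. Easy queries boost the
        exit operator, truncating the workflow and saving LLM/tool calls."""
        arch = []
        for layer_logits in self.logits:
            logits = list(layer_logits)
            # Query conditioning (toy): low difficulty raises exit probability.
            logits[OPERATORS.index("EarlyExit")] += (1.0 - query_difficulty) * 2.0
            probs = self._softmax(logits)
            op = random.choices(OPERATORS, weights=probs, k=1)[0]
            if op == "EarlyExit":
                break  # truncate: fewer layers, lower inference cost
            arch.append(op)
        return arch

net = AgenticSupernet(num_layers=4, num_ops=len(OPERATORS))
print(net.sample(query_difficulty=0.1))  # often shallow, e.g. ['CoT']
print(net.sample(query_difficulty=0.9))  # usually deeper, 3-4 operators

In MaAS itself the distribution is learned and conditioned on the query rather than on a scalar difficulty; the sketch only shows the mechanism by which a distribution over operators yields per-query resource allocation.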

@article{zhang2025_2502.04180,
  title={Multi-agent Architecture Search via Agentic Supernet},
  author={Guibin Zhang and Luyang Niu and Junfeng Fang and Kun Wang and Lei Bai and Xiang Wang},
  journal={arXiv preprint arXiv:2502.04180},
  year={2025}
}