
Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development

Main: 7 pages, 11 figures, 5 tables · Bibliography: 4 pages · Appendix: 10 pages
Abstract

We have seen remarkable progress in multi-agent systems empowered by large language models (LLMs) that solve complex tasks requiring cooperation among experts with diverse skills. However, optimizing LLM-based multi-agent systems remains challenging. In this work, we perform an empirical case study on group optimization of role-based multi-agent systems using natural language feedback, targeting challenging software development tasks under various evaluation dimensions. We propose a two-step agent-prompt optimization pipeline: first, identify underperforming agents and explain their failures using textual feedback; second, optimize the system prompts of the identified agents based on those failure explanations. We then study how different optimization settings affect system performance with two comparison groups: online versus offline optimization, and individual versus group optimization. For group optimization, we study two prompting strategies: one-pass and multi-pass prompt optimization. Overall, we demonstrate the effectiveness of our optimization method for role-based multi-agent systems on software development tasks evaluated along diverse dimensions, and we investigate how different optimization settings shape the group behavior of multi-agent systems, providing practical insights for future development.
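To make the two-step pipeline concrete, the sketch below shows one plausible shape of the optimization loop under stated assumptions; it is not the authors' implementation. The helpers `llm`, `run_system`, and `evaluate`, as well as the `Agent` class, are hypothetical placeholders for whatever LLM client, multi-agent runtime, and evaluator the paper actually uses.

```python
# Minimal sketch of the two-step agent-prompt optimization loop described in the
# abstract. All names here (llm, run_system, evaluate, Agent) are hypothetical
# stand-ins, not the paper's actual API.

from dataclasses import dataclass


@dataclass
class Agent:
    role: str           # e.g. "architect", "coder", "tester"
    system_prompt: str  # the prompt being optimized


def llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-following LLM."""
    raise NotImplementedError


def run_system(agents: list[Agent], task: str) -> str:
    """Placeholder: run the multi-agent system and return its transcript/output."""
    raise NotImplementedError


def evaluate(trajectory: str) -> str:
    """Placeholder: return textual feedback along the chosen evaluation dimensions."""
    raise NotImplementedError


def optimize_prompts(agents: list[Agent], task: str, rounds: int = 3) -> list[Agent]:
    for _ in range(rounds):
        trajectory = run_system(agents, task)
        feedback = evaluate(trajectory)

        # Step 1: identify underperforming agents and explain each failure,
        # using the transcript and the textual feedback.
        diagnosis = llm(
            "Given this multi-agent transcript and the evaluator feedback, "
            "list the roles that underperformed and explain each failure.\n\n"
            f"Transcript:\n{trajectory}\n\nFeedback:\n{feedback}"
        )

        # Step 2: rewrite the system prompt of each identified agent based on
        # its failure explanation (group optimization would revise them jointly).
        for agent in agents:
            if agent.role in diagnosis:
                agent.system_prompt = llm(
                    f"Revise the system prompt for role '{agent.role}' so that "
                    "the described failure is avoided.\n\n"
                    f"Current prompt:\n{agent.system_prompt}\n\n"
                    f"Failure explanation:\n{diagnosis}"
                )
    return agents
```

In this reading, the online/offline and individual/group settings studied in the paper would correspond to when the loop is run (during deployment versus on a held-out training phase) and whether Step 2 rewrites prompts per agent in isolation or for the identified agents together.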

@article{shen2025_2505.16086,
  title={Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development},
  author={Ming Shen and Raphael Shu and Anurag Pratik and James Gung and Yubin Ge and Monica Sunkara and Yi Zhang},
  journal={arXiv preprint arXiv:2505.16086},
  year={2025}
}