
Debate Only When Necessary: Adaptive Multiagent Collaboration for Efficient LLM Reasoning

Abstract

Multiagent collaboration has emerged as a promising framework for enhancing the reasoning capabilities of large language models (LLMs). Despite improvements in reasoning, the approach introduces substantial computational overhead resulting from iterative agent interactions. Furthermore, engaging in unnecessary debates increases the risk of generating erroneous responses. To address these challenges, we propose Debate Only When Necessary (DOWN), an adaptive multiagent debate framework that selectively activates debate based on the confidence score of the agent's initial response. Debate is activated only for queries requiring further deliberation, during which agents refine their outputs by referencing peer responses and associated confidence scores. Evaluations on benchmarks show that DOWN improves efficiency by up to six times while preserving or even outperforming the performance of existing methods. Further analysis indicates that DOWN effectively mitigates the risk of error propagation stemming from the unnecessary debate process. These findings demonstrate the effectiveness of our approach in delivering high-performance LLM solutions at a lower computational cost.
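The gating idea described above can be sketched in a few lines. This is a hypothetical illustration only, assuming agents that return an (answer, confidence) pair; the function names, prompt format, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of confidence-gated multiagent debate in the spirit of
# DOWN. Each agent is a callable: query -> (answer, confidence in [0, 1]).
# Threshold and debate format are assumptions for illustration.

def down_style_answer(agents, query, threshold=0.9, rounds=2):
    """Query all agents once; run debate rounds only if any agent
    falls below the confidence threshold."""
    responses = [agent(query) for agent in agents]

    if all(conf >= threshold for _, conf in responses):
        # All agents are confident: skip the debate entirely and
        # return the single most confident answer.
        return max(responses, key=lambda r: r[1])[0]

    # Otherwise, debate: each agent re-answers after seeing its peers'
    # answers together with their confidence scores.
    for _ in range(rounds):
        peer_context = "; ".join(
            f"answer={ans!r} (confidence={conf:.2f})"
            for ans, conf in responses
        )
        responses = [
            agent(f"{query}\nPeer responses: {peer_context}")
            for agent in agents
        ]

    return max(responses, key=lambda r: r[1])[0]
```

With confident stub agents the debate branch is never entered, which is the source of the efficiency gain the abstract reports; only low-confidence queries pay the cost of iterative interaction.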

@article{eo2025_2504.05047,
  title={Debate Only When Necessary: Adaptive Multiagent Collaboration for Efficient LLM Reasoning},
  author={Sugyeong Eo and Hyeonseok Moon and Evelyn Hayoon Zi and Chanjun Park and Heuiseok Lim},
  journal={arXiv preprint arXiv:2504.05047},
  year={2025}
}