RedDebate: Safer Responses through Multi-Agent Red Teaming Debates

We propose RedDebate, a novel multi-agent debate framework that leverages adversarial argumentation among Large Language Models (LLMs) to proactively identify and mitigate their own unsafe behaviours. Existing AI safety methods often depend heavily on costly human evaluations or isolated single-model assessment, both subject to scalability constraints and oversight risks. RedDebate instead embraces collaborative disagreement, enabling multiple LLMs to critically examine one another's reasoning, systematically uncover unsafe blind spots through automated red-teaming, and iteratively improve their responses. We further integrate distinct types of long-term memory that retain learned safety insights from debate interactions. Evaluating on established safety benchmarks such as HarmBench, we demonstrate the effectiveness of the proposed method. Debate alone reduces unsafe behaviours by 17.7%, and when combined with long-term memory modules, it achieves reductions exceeding 23.5%. To our knowledge, RedDebate constitutes the first fully automated framework that combines multi-agent debate with red-teaming to progressively enhance AI safety without direct human intervention. (GitHub repository: this https URL)
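To make the described pipeline concrete, the following is a minimal sketch of a RedDebate-style loop: defender agents answer, a red-teaming agent probes the answers for unsafe content, defenders revise in light of the critique, and a long-term memory stores distilled safety insights for future queries. All names, interfaces, and prompts here (e.g. `red_debate`, `SafetyMemory`, the `Agent` callable type, the number of rounds) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a RedDebate-style debate loop.
# Every interface below is an assumption for illustration only.
from dataclasses import dataclass, field
from typing import Callable, List

# An "agent" is any callable mapping a prompt to a text response,
# e.g. a thin wrapper around an LLM API of your choice.
Agent = Callable[[str], str]


@dataclass
class SafetyMemory:
    """Long-term store of safety insights distilled from past debates."""
    insights: List[str] = field(default_factory=list)

    def recall(self) -> str:
        # Surface the most recent insights to condition new debates.
        return "\n".join(f"- {i}" for i in self.insights[-10:])

    def remember(self, insight: str) -> None:
        self.insights.append(insight)


def red_debate(prompt: str,
               defenders: List[Agent],
               red_teamer: Agent,
               judge: Agent,
               memory: SafetyMemory,
               rounds: int = 3) -> str:
    """Run a multi-round debate: defenders answer, the red-teamer attacks,
    and each round's critique is fed back so defenders can revise."""
    context = f"Known safety insights:\n{memory.recall()}\n\nUser prompt: {prompt}"
    answers = [agent(context) for agent in defenders]

    for _ in range(rounds):
        # Red-teamer searches for unsafe blind spots in the current answers.
        critique = red_teamer(
            "Identify any unsafe or policy-violating content in these answers:\n"
            + "\n---\n".join(answers)
        )
        # Defenders revise their answers given the critique and their peers' answers.
        answers = [
            agent(context
                  + "\n\nPeer answers:\n" + "\n---\n".join(answers)
                  + "\n\nRed-team critique:\n" + critique
                  + "\n\nRevise your answer to address the critique.")
            for agent in defenders
        ]
        # Persist the distilled safety lesson for later queries.
        memory.remember(judge(
            "Summarise the key safety lesson from this critique:\n" + critique
        ))

    # Judge selects the safest final answer from the revised set.
    return judge("Pick the safest, most helpful answer:\n" + "\n---\n".join(answers))
```

Under these assumptions, the memory module is what lets reductions in unsafe behaviour accumulate across queries rather than resetting with each new debate.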
@article{asad2025_2506.11083,
  title   = {RedDebate: Safer Responses through Multi-Agent Red Teaming Debates},
  author  = {Ali Asad and Stephen Obadinma and Radin Shayanfar and Xiaodan Zhu},
  journal = {arXiv preprint arXiv:2506.11083},
  year    = {2025}
}