
PeerGuard: Defending Multi-Agent Systems Against Backdoor Attacks Through Mutual Reasoning

Abstract

Multi-agent systems leverage advanced AI models as autonomous agents that interact, cooperate, or compete to complete complex tasks across applications such as robotics and traffic management. Despite their growing importance, safety in multi-agent systems remains largely underexplored, with most research focusing on single AI models rather than interacting agents. This work investigates backdoor vulnerabilities in multi-agent systems and proposes a defense mechanism based on agent interactions: leveraging its reasoning ability, each agent evaluates the responses of others to detect illogical reasoning processes, which indicate poisoned agents. Experiments on LLM-based multi-agent systems, including the ChatGPT series and Llama 3, demonstrate the effectiveness of the proposed method, which achieves high accuracy in identifying poisoned agents while producing few false positives on clean agents. We believe this work provides insights into multi-agent system safety and contributes to the development of robust, trustworthy AI interactions.
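
The peer-evaluation idea can be illustrated with a short sketch. The Python example below is a hypothetical mock-up, not the authors' implementation: the AgentResponse structure, the peer_review majority vote, the threshold value, and the toy reasoning_is_consistent heuristic (which a real system would replace with an LLM judge prompted to decide whether an agent's answer follows from its stated reasoning) are all assumptions made for illustration.

from dataclasses import dataclass

# Hypothetical sketch of reasoning-based peer evaluation. Names and the
# consistency check are illustrative, not the paper's implementation.

@dataclass
class AgentResponse:
    agent_id: str
    reasoning: str  # the agent's stated reasoning process
    answer: str     # the agent's final answer

def reasoning_is_consistent(reasoning: str, answer: str) -> bool:
    # Stand-in for an LLM judge: in a real system, each reviewing agent
    # would be prompted to decide whether the answer follows from the
    # reasoning. A backdoored agent's trigger-induced answer typically
    # does not. This toy check keeps the example runnable offline.
    return answer.lower() in reasoning.lower()

def peer_review(responses, threshold=0.5):
    # Each agent reviews every peer; an agent is flagged as poisoned
    # when more than `threshold` of its peers judge its reasoning
    # inconsistent with its final answer.
    flagged = set()
    for target in responses:
        reviewers = [r for r in responses if r.agent_id != target.agent_id]
        votes = sum(
            not reasoning_is_consistent(target.reasoning, target.answer)
            for _ in reviewers
        )
        if reviewers and votes / len(reviewers) > threshold:
            flagged.add(target.agent_id)
    return flagged

responses = [
    AgentResponse("A", "2 + 2 equals 4, so the answer is 4.", "4"),
    AgentResponse("B", "Adding 2 and 2 gives 4.", "4"),
    # Poisoned agent: the final answer contradicts its own reasoning.
    AgentResponse("C", "2 + 2 equals 4.", "5"),
]
print(peer_review(responses))  # {'C'}

In practice, the consistency judgment would itself be an LLM call issued by each reviewing agent, so votes can disagree; the majority threshold then trades off detection accuracy against false positives on clean agents.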

@article{fan2025_2505.11642,
  title={PeerGuard: Defending Multi-Agent Systems Against Backdoor Attacks Through Mutual Reasoning},
  author={Falong Fan and Xi Li},
  journal={arXiv preprint arXiv:2505.11642},
  year={2025}
}