ResearchTrend.AI

An Adversary-Resistant Multi-Agent LLM System via Credibility Scoring

30 May 2025
Sana Ebrahimi
Mohsen Dehghankar
Abolfazl Asudeh
arXiv:2505.24239
Main: 7 pages · 9 figures · 5 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

While multi-agent LLM systems show strong capabilities across various domains, they are highly vulnerable to adversarial and low-performing agents. To address this issue, we introduce a general, adversary-resistant multi-agent LLM framework based on credibility scoring. We model the collaborative query-answering process as an iterative game in which agents communicate and contribute to a final system output. Our system associates a credibility score with each agent and uses these scores when aggregating the team's outputs. Credibility scores are learned gradually from each agent's past contributions to query answering. Experiments across multiple tasks and settings demonstrate our system's effectiveness in mitigating adversarial influence and enhancing the resilience of multi-agent cooperation, even in adversary-majority settings.
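The paper's exact scoring and aggregation rules are given in the full text; purely as an illustration of the general idea the abstract describes, credibility-weighted answer aggregation with a gradual score update can be sketched as follows. All function names and the multiplicative update rule here are assumptions for the sketch, not the authors' method:

```python
from collections import defaultdict

def aggregate(answers, credibility):
    """Credibility-weighted vote: each agent's answer counts with
    weight equal to that agent's current credibility score."""
    votes = defaultdict(float)
    for agent, answer in answers.items():
        votes[answer] += credibility[agent]
    return max(votes, key=votes.get)

def update_credibility(answers, credibility, final_answer, lr=0.5):
    """Gradually reward agents whose answers matched the aggregated
    output and penalize those whose answers did not (a simple
    multiplicative-weights-style update, assumed for this sketch)."""
    for agent, answer in answers.items():
        if answer == final_answer:
            credibility[agent] *= (1 + lr)
        else:
            credibility[agent] *= (1 - lr)
    # Normalize so scores remain a bounded distribution over agents.
    total = sum(credibility.values())
    for agent in credibility:
        credibility[agent] /= total
    return credibility
```

Repeating these two steps over rounds of query answering shrinks the influence of agents that repeatedly disagree with the consensus, which is the intuition behind resisting adversarial and low-performing agents.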

@article{ebrahimi2025_2505.24239,
  title={An Adversary-Resistant Multi-Agent LLM System via Credibility Scoring},
  author={Sana Ebrahimi and Mohsen Dehghankar and Abolfazl Asudeh},
  journal={arXiv preprint arXiv:2505.24239},
  year={2025}
}