LLM-Consensus: Multi-Agent Debate for Visual Misinformation Detection

26 October 2024
Kumud Lakara
Georgia Channing
Juil Sock
Christian Rupprecht
Philip H. S. Torr
John Collomosse
Christian Schroeder de Witt
Abstract

One of the most challenging forms of misinformation involves the out-of-context (OOC) use of images paired with misleading text, creating false narratives. Existing AI-driven detection systems lack explainability and require expensive fine-tuning. We address these issues with LLM-Consensus, a multi-agent debate system for OOC misinformation detection. LLM-Consensus introduces a novel multi-agent debate framework where multimodal agents collaborate to assess contextual consistency and request external information to enhance cross-context reasoning and decision-making. Our framework enables explainable detection with state-of-the-art accuracy even without domain-specific fine-tuning. Extensive ablation studies confirm that external retrieval significantly improves detection accuracy, and user studies demonstrate that LLM-Consensus boosts performance for both experts and non-experts. These results position LLM-Consensus as a powerful tool for autonomous and citizen intelligence applications.
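The abstract describes the debate protocol only at a high level. Below is a minimal, hypothetical Python sketch of what such a loop might look like; it is not the authors' implementation. query_agent is a stub standing in for a real multimodal LLM call, retrieve_external_context is a placeholder for the external retrieval step, and all names and parameters (run_debate, max_rounds, the unanimity-then-majority consensus rule) are illustrative assumptions.

# Hypothetical sketch of a multi-agent debate loop for out-of-context
# (OOC) detection. Not the paper's implementation: the agent and the
# retrieval hook are stubs for illustration only.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str       # "OOC" or "NOT_OOC"
    rationale: str   # natural-language explanation for the verdict

def retrieve_external_context(claim_text):
    """Placeholder for external retrieval (e.g. reverse image search)."""
    return "retrieved snippet that contradicts the claim"

def query_agent(name, image_caption, claim_text, transcript, evidence):
    """Stub for a multimodal LLM agent; replace with a real API call.

    A real agent would read the image, the paired text, the debate
    transcript so far, and any retrieved evidence, then argue a side.
    """
    label = "OOC" if evidence and "contradicts" in evidence else "NOT_OOC"
    return Verdict(label, f"{name}: external evidence suggests {label}")

def run_debate(image_caption, claim_text, agents=("A", "B", "C"),
               max_rounds=3):
    """Run debate rounds until the agents agree or the budget runs out."""
    transcript = []
    evidence = retrieve_external_context(claim_text)
    for _ in range(max_rounds):
        verdicts = [query_agent(a, image_caption, claim_text,
                                transcript, evidence) for a in agents]
        transcript.extend(v.rationale for v in verdicts)
        labels = {v.label for v in verdicts}
        if len(labels) == 1:          # unanimous: consensus reached
            return labels.pop(), transcript
    # No consensus within the round budget: fall back to a majority vote.
    votes = [v.label for v in verdicts]
    return max(set(votes), key=votes.count), transcript

if __name__ == "__main__":
    label, transcript = run_debate(
        image_caption="Flooded street, daytime",
        claim_text="Photo shows yesterday's protest downtown",
    )
    print(label)
    for line in transcript:
        print(" -", line)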

View on arXiv
@article{lakara2025_2410.20140,
  title={LLM-Consensus: Multi-Agent Debate for Visual Misinformation Detection},
  author={Kumud Lakara and Georgia Channing and Juil Sock and Christian Rupprecht and Philip Torr and John Collomosse and Christian Schroeder de Witt},
  journal={arXiv preprint arXiv:2410.20140},
  year={2025}
}