
CMIE: Combining MLLM Insights with External Evidence for Explainable Out-of-Context Misinformation Detection

Abstract

Multimodal large language models (MLLMs) have demonstrated impressive capabilities in visual reasoning and text generation. While previous studies have explored the application of MLLMs to detecting out-of-context (OOC) misinformation, our empirical analysis reveals two persistent challenges of this paradigm. Evaluating the representative GPT-4o model on both direct reasoning and evidence-augmented reasoning, we find that MLLMs struggle to capture deeper relationships, specifically cases in which the image and text are not directly connected but are associated through underlying semantic links. Moreover, noise in the retrieved evidence further impairs detection accuracy. To address these challenges, we propose CMIE, a novel OOC misinformation detection framework that incorporates a Coexistence Relationship Generation (CRG) strategy and an Association Scoring (AS) mechanism. CMIE identifies the underlying coexistence relationship between an image and its accompanying text, and selectively utilizes relevant evidence to enhance misinformation detection. Experimental results demonstrate that our approach outperforms existing methods.
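As a rough illustration of the pipeline the abstract describes, the Python sketch below mocks up the two components: a CRG step that asks an MLLM to articulate the coexistence relationship between image and caption, and an AS step that scores retrieved evidence against that relationship before the final verdict. The call_mllm helper, prompt wording, 0-10 score scale, and relevance threshold are all assumptions made for illustration, not the authors' implementation.

# Hedged sketch of the CMIE pipeline described in the abstract.
# `call_mllm` is a hypothetical placeholder for any multimodal LLM API
# (e.g., GPT-4o); prompts, score scale, and threshold are assumptions.

from typing import List, Optional


def call_mllm(prompt: str, image_path: Optional[str] = None) -> str:
    """Placeholder for a multimodal LLM call (hypothetical stub)."""
    raise NotImplementedError("Wire this to your MLLM provider of choice.")


def generate_coexistence_relationship(image_path: str, caption: str) -> str:
    """CRG step: ask the MLLM to articulate the underlying semantic link
    that would have to hold for the image and caption to co-occur."""
    prompt = (
        "Describe the underlying relationship that would have to hold for "
        f"this image and the caption '{caption}' to refer to the same event."
    )
    return call_mllm(prompt, image_path)


def score_evidence(relationship: str, evidence: List[str]) -> List[float]:
    """AS step: rate each retrieved evidence snippet by its relevance to
    the generated coexistence relationship (assumed 0-10 scale)."""
    scores = []
    for snippet in evidence:
        prompt = (
            "On a scale of 0 to 10, how relevant is the following evidence "
            f"to this claimed relationship?\nRelationship: {relationship}\n"
            f"Evidence: {snippet}\nAnswer with a single number."
        )
        scores.append(float(call_mllm(prompt)))
    return scores


def detect_ooc(image_path: str, caption: str, evidence: List[str],
               threshold: float = 5.0) -> str:
    """End-to-end sketch: generate the relationship, keep only evidence
    above the (assumed) relevance threshold, then ask for a verdict
    with a brief explanation."""
    relationship = generate_coexistence_relationship(image_path, caption)
    scores = score_evidence(relationship, evidence)
    kept = [e for e, s in zip(evidence, scores) if s >= threshold]
    prompt = (
        f"Caption: {caption}\nClaimed relationship: {relationship}\n"
        f"Relevant evidence: {kept}\n"
        "Is this image-caption pair out-of-context misinformation? "
        "Answer 'pristine' or 'falsified' and explain briefly."
    )
    return call_mllm(prompt, image_path)

The explanation requested in the final prompt is what makes the detection explainable: the verdict is grounded in the generated relationship and the filtered evidence rather than a bare label.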

@article{li2025_2505.23449,
  title={CMIE: Combining MLLM Insights with External Evidence for Explainable Out-of-Context Misinformation Detection},
  author={Fanxiao Li and Jiaying Wu and Canyuan He and Wei Zhou},
  journal={arXiv preprint arXiv:2505.23449},
  year={2025}
}