
Mitigating Behavioral Hallucination in Multimodal Large Language Models for Sequential Images

Main: 8 pages; Appendix: 3 pages; Bibliography: 3 pages; 7 figures, 5 tables
Abstract

While multimodal large language models excel at various tasks, they still suffer from hallucinations, which limit their reliability and scalability for broader domain applications. To address this issue, recent research has focused mainly on object hallucination. However, for sequential images there is also behavioral hallucination, which is far less studied. This work aims to fill that gap. We first reveal that behavioral hallucinations arise mainly from two key factors: prior-driven bias and the snowball effect. Based on these observations, we introduce SHE (Sequence Hallucination Eradication), a lightweight, two-stage framework that (1) detects hallucinations via a visual-textual alignment check over our proposed adaptive temporal window and (2) mitigates them via orthogonal projection onto the joint embedding space. We also propose a new metric, BEACH, to quantify the severity of behavioral hallucination. Empirical results on standard benchmarks demonstrate that SHE reduces behavioral hallucination by over 10% on BEACH while maintaining descriptive accuracy.
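To make the two stages concrete, below is a minimal Python sketch of the pipeline the abstract describes, assuming pre-computed frame and sentence embeddings in a shared (e.g., CLIP-style) space. All function names, the fixed-size recent-frame window standing in for the paper's adaptive temporal window, and the thresholds are illustrative assumptions, not the authors' implementation.

import numpy as np

def alignment_score(frame_embs: np.ndarray, sent_emb: np.ndarray, window: int) -> float:
    """Stage 1: max cosine similarity between a generated sentence and the
    frames inside a temporal window (here simply the `window` most recent
    frames; the paper's window is adaptive)."""
    frames = frame_embs[-window:]
    frames = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    s = sent_emb / np.linalg.norm(sent_emb)
    return float(np.max(frames @ s))

def project_onto_visual_subspace(sent_emb: np.ndarray, frame_embs: np.ndarray) -> np.ndarray:
    """Stage 2: orthogonal projection of a text embedding onto the subspace
    spanned by the frame embeddings, discarding the component with no
    visual support."""
    # Reduced QR gives an orthonormal basis for span(frame embeddings).
    Q, _ = np.linalg.qr(frame_embs.T)
    return Q @ (Q.T @ sent_emb)

def she_step(frame_embs, sent_emb, window=4, tau=0.25):
    """Flag a sentence whose alignment falls below tau, then mitigate it
    by projection; otherwise pass it through unchanged."""
    if alignment_score(frame_embs, sent_emb, window) < tau:
        return project_onto_visual_subspace(sent_emb, frame_embs)
    return sent_emb

The projection step is the key design choice: rather than regenerating flagged text, it keeps only the part of the sentence representation that lies in the visual subspace, which is what makes the framework lightweight.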

@article{you2025_2506.07184,
  title={Mitigating Behavioral Hallucination in Multimodal Large Language Models for Sequential Images},
  author={Liangliang You and Junchi Yao and Shu Yang and Guimin Hu and Lijie Hu and Di Wang},
  journal={arXiv preprint arXiv:2506.07184},
  year={2025}
}