Mitigating Hallucinations via Inter-Layer Consistency Aggregation in Large Vision-Language Models

Despite the impressive capabilities of Large Vision-Language Models (LVLMs), they remain susceptible to hallucinations: generating content that is inconsistent with the input image. Existing training-free hallucination mitigation methods often suffer from unstable performance and high sensitivity to hyperparameter settings, limiting their practicality and broader adoption. In this paper, we propose a novel decoding mechanism, Decoding with Inter-layer Consistency via Layer Aggregation (DCLA), which requires no retraining, fine-tuning, or access to external knowledge bases. Specifically, our approach constructs a dynamic semantic reference by aggregating representations from previous layers, and corrects semantically deviated layers to enforce inter-layer consistency. This design allows DCLA to mitigate hallucinations robustly across multiple LVLMs. Experiments on hallucination benchmarks such as MME and POPE demonstrate that DCLA effectively reduces hallucinations while enhancing the reliability and performance of LVLMs.
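To make the mechanism concrete, below is a minimal sketch of how inter-layer consistency aggregation could be implemented at decoding time, assuming PyTorch and a decoder that exposes per-layer hidden states (e.g., a HuggingFace-style model called with output_hidden_states=True). The function name, window size, similarity threshold, and blending weight are illustrative assumptions, not the paper's reported design or hyperparameters.

```python
import torch
import torch.nn.functional as F

def dcla_correct_hidden_states(hidden_states, window=4, sim_threshold=0.8, alpha=0.5):
    """Hypothetical sketch of inter-layer consistency aggregation.

    hidden_states: list of tensors of shape [batch, seq_len, dim], one per layer.
    window, sim_threshold, alpha: illustrative hyperparameters (assumptions).
    Returns a list of corrected hidden states of the same shapes.
    """
    corrected = [hidden_states[0]]
    for layer_idx in range(1, len(hidden_states)):
        # Dynamic semantic reference: average of the preceding (already corrected) layers
        # within a sliding window.
        start = max(0, layer_idx - window)
        reference = torch.stack(corrected[start:layer_idx], dim=0).mean(dim=0)

        current = hidden_states[layer_idx]
        # Per-token cosine similarity between the current layer and the reference.
        sim = F.cosine_similarity(current, reference, dim=-1)  # [batch, seq_len]

        # Tokens whose representations deviate from the reference are blended back
        # toward the aggregated reference to enforce inter-layer consistency.
        deviated = (sim < sim_threshold).unsqueeze(-1)        # [batch, seq_len, 1]
        blended = alpha * current + (1.0 - alpha) * reference
        corrected.append(torch.where(deviated, blended, current))

    return corrected
```

In this reading, the aggregated reference plays the role of the "dynamic semantic reference" described above, and the blending step stands in for the correction of semantically deviated layers; the actual DCLA decoding rule may differ in how the reference is built and how corrections feed into the output distribution.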
@article{tang2025_2505.12343,
  title   = {Mitigating Hallucinations via Inter-Layer Consistency Aggregation in Large Vision-Language Models},
  author  = {Kai Tang and Jinhao You and Xiuqi Ge and Hanze Li and Yichen Guo and Xiande Huang},
  journal = {arXiv preprint arXiv:2505.12343},
  year    = {2025}
}