Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models

26 May 2025
Xinmiao Hu
Chun Wang
Ruihe An
ChenYu Shao
Xiaojun Ye
Sheng Zhou
Liangcheng Li
Main: 9 pages · 20 figures · 5 tables · Bibliography: 5 pages · Appendix: 7 pages
Abstract

Multimodal Large Language Models (MLLMs) have demonstrated strong performance in visual understanding tasks, yet they often suffer from object hallucinations: generating descriptions of objects that are inconsistent with or entirely absent from the input. This issue is closely related to dataset biases, where frequent co-occurrences of objects lead to entangled semantic representations across modalities. As a result, models may erroneously activate object representations that are commonly associated with the input but not actually present. To address this, we propose a causality-driven disentanglement framework that mitigates hallucinations through causal intervention. Our approach includes a Causal-Driven Projector in the visual pathway and a Causal Intervention Module integrated into the final transformer layer of the language model. These components work together to reduce spurious correlations caused by biased training data. Experimental results show that our method significantly reduces hallucinations while maintaining strong performance on multiple multimodal benchmarks. Visualization analyses further confirm improved separability of object representations. The code is available at: this https URL
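The abstract names two components, a Causal-Driven Projector in the visual pathway and a Causal Intervention Module at the final transformer layer, but does not describe their internals. The following is a minimal PyTorch sketch of how such components could be wired together; the projector architecture, the learned confounder-prototype dictionary, the gating, and all dimensions are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class CausalDrivenProjector(nn.Module):
    # Maps visual encoder features into the language model's embedding space
    # (hypothetical two-layer MLP design).
    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vis_feats: torch.Tensor) -> torch.Tensor:
        # vis_feats: (batch, num_patches, vis_dim) -> (batch, num_patches, llm_dim)
        return self.proj(vis_feats)


class CausalInterventionModule(nn.Module):
    # Backdoor-style adjustment (assumed formulation): attend over a learned set of
    # confounder prototypes and mix the result back into the final-layer hidden
    # states, so activations driven purely by co-occurrence statistics are attenuated.
    def __init__(self, hidden_dim: int, num_confounders: int = 64):
        super().__init__()
        self.confounders = nn.Parameter(torch.randn(num_confounders, hidden_dim) * 0.02)
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from the final transformer layer
        attn = torch.softmax(hidden @ self.confounders.t(), dim=-1)  # (B, T, K)
        adjustment = attn @ self.confounders                         # (B, T, D)
        gate = torch.sigmoid(self.gate(hidden))                      # (B, T, 1)
        return hidden + gate * adjustment


# Usage sketch with made-up dimensions (CLIP-style patch features into a 4096-dim LLM):
projector = CausalDrivenProjector(vis_dim=1024, llm_dim=4096)
intervention = CausalInterventionModule(hidden_dim=4096)
visual_tokens = projector(torch.randn(2, 576, 1024))      # visual tokens fed to the LLM
adjusted_hidden = intervention(torch.randn(2, 64, 4096))  # adjusted final-layer states
print(visual_tokens.shape, adjusted_hidden.shape)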

@article{hu2025_2505.19474,
  title={Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models},
  author={Xinmiao Hu and Chun Wang and Ruihe An and ChenYu Shao and Xiaojun Ye and Sheng Zhou and Liangcheng Li},
  journal={arXiv preprint arXiv:2505.19474},
  year={2025}
}