Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization

4 June 2025
Jiulong Wu
Zhengliang Shi
Shuaiqiang Wang
Jizhou Huang
Dawei Yin
Lingyong Yan
Min Cao
Min Zhang
Main: 2 pages, 11 figures, 4 tables; Appendix: 14 pages
Abstract

Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities across multiple tasks. However, their trustworthiness is often challenged by hallucinations, which can be attributed to modality misalignment and to the inherent hallucinations of their underlying Large Language Model (LLM) backbones. Existing preference alignment methods focus on aligning model responses with human preferences while neglecting image-text modality alignment, resulting in over-reliance on the LLM and in hallucinations. In this paper, we propose Entity-centric Multimodal Preference Optimization (EMPO), which achieves stronger modality alignment than existing human preference alignment methods. In addition, to overcome the scarcity of high-quality multimodal preference data, we use open-source instruction datasets to automatically construct high-quality preference data across three aspects: image, instruction, and response. Experiments on two human preference datasets and five multimodal hallucination benchmarks demonstrate the effectiveness of EMPO, e.g., reducing hallucination rates by 85.9% on Object-HalBench and by 49.8% on MM-HalBench.
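
As background for readers unfamiliar with preference optimization: methods of this kind are typically trained on (chosen, rejected) response pairs with a DPO-style objective. The sketch below shows the standard DPO loss only as a reference point; it is a generic illustration under that assumption, not the authors' EMPO objective, and the function and tensor names are hypothetical.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective on summed sequence log-probabilities.
    # Each argument is a 1-D tensor over a batch of preference pairs,
    # computed under the trainable policy and a frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the reward margin between preferred and dispreferred responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probs standing in for model outputs.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(loss.item())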

@article{wu2025_2506.04039,
  title={Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization},
  author={Jiulong Wu and Zhengliang Shi and Shuaiqiang Wang and Jizhou Huang and Dawei Yin and Lingyong Yan and Min Cao and Min Zhang},
  journal={arXiv preprint arXiv:2506.04039},
  year={2025}
}