Large Visual Language Models (LVLMs) have demonstrated impressive capabilities across multiple tasks. However, their trustworthiness is often challenged by hallucinations, which can be attributed to modality misalignment and to hallucinations inherited from the underlying Large Language Model (LLM) backbone. Existing preference alignment methods focus on aligning model responses with human preferences while neglecting image-text modality alignment, resulting in over-reliance on the LLM backbone and, consequently, hallucinations. In this paper, we propose Entity-centric Multimodal Preference Optimization (EMPO), which achieves stronger modality alignment than existing human preference alignment methods. In addition, to overcome the scarcity of high-quality multimodal preference data, we use open-source instruction datasets to automatically construct high-quality preference data across three aspects: image, instruction, and response. Experiments on two human preference datasets and five multimodal hallucination benchmarks demonstrate the effectiveness of EMPO, e.g., reducing hallucination rates by 85.9% on Object-HalBench and by 49.8% on MM-HalBench.
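As a rough illustration only (the abstract does not give EMPO's exact objective), the sketch below shows how preference pairs that differ in the image, the instruction, or the response could be scored with a standard DPO-style preference loss. All function and variable names (dpo_preference_loss, beta, the log-probability arguments) are illustrative assumptions, not the paper's implementation.

    # Minimal DPO-style preference loss sketch in PyTorch. Preference pairs may
    # differ in the image, the instruction, or the response, matching the three
    # aspects of automatically constructed preference data described above.
    import torch
    import torch.nn.functional as F

    def dpo_preference_loss(
        policy_logp_chosen: torch.Tensor,    # log p_theta(y_w | image, instruction)
        policy_logp_rejected: torch.Tensor,  # log p_theta(y_l | image, instruction)
        ref_logp_chosen: torch.Tensor,       # log p_ref(y_w | image, instruction)
        ref_logp_rejected: torch.Tensor,     # log p_ref(y_l | image, instruction)
        beta: float = 0.1,                   # temperature on the implicit reward margin
    ) -> torch.Tensor:
        """Standard DPO loss over per-sequence log-probabilities."""
        chosen_reward = policy_logp_chosen - ref_logp_chosen
        rejected_reward = policy_logp_rejected - ref_logp_rejected
        return -F.logsigmoid(beta * (chosen_reward - rejected_reward)).mean()

    if __name__ == "__main__":
        # Toy usage with random per-sequence log-probabilities for a batch of 4.
        torch.manual_seed(0)
        loss = dpo_preference_loss(
            policy_logp_chosen=torch.randn(4),
            policy_logp_rejected=torch.randn(4),
            ref_logp_chosen=torch.randn(4),
            ref_logp_rejected=torch.randn(4),
        )
        print(loss.item())

In this framing, a "rejected" sample could be built by swapping in a mismatched image, a perturbed instruction, or a hallucinated response for the same preference triple; the loss then pushes the policy to prefer the entity-consistent pairing over the corrupted one.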
@article{wu2025_2506.04039,
  title   = {Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization},
  author  = {Jiulong Wu and Zhengliang Shi and Shuaiqiang Wang and Jizhou Huang and Dawei Yin and Lingyong Yan and Min Cao and Min Zhang},
  journal = {arXiv preprint arXiv:2506.04039},
  year    = {2025}
}