Mitigating Object Hallucination via Robust Local Perception Search

7 June 2025
Zixian Gao, Chao Yang, Zhanhui Zhou, Xing Xu, Chaochao Lu
Main: 7 pages · Appendix: 1 page · Bibliography: 3 pages · 5 figures · 3 tables
Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have enabled them to effectively integrate vision and language across a variety of downstream tasks. Despite this success, these models still exhibit hallucination, producing outputs that appear plausible but do not align with the content of the image. To mitigate this issue, we introduce Local Perception Search (LPS), a simple, training-free inference-time decoding method that effectively suppresses hallucinations. LPS uses local visual prior information as a value function to correct the decoding process. We further observe that the impact of the local visual prior on model performance is more pronounced when image noise is high. Notably, LPS is a plug-and-play approach compatible with a variety of models. Extensive experiments on widely used hallucination benchmarks and noisy data demonstrate that LPS significantly reduces the incidence of hallucinations compared to the baselines, with especially strong performance in noisy settings.
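
The abstract frames LPS as value-guided decoding: candidate continuations are re-scored by a value function derived from local visual evidence before a token is committed. The sketch below illustrates that general idea only; it is not the paper's implementation, and the names in it (mllm, visual_value_fn, the alpha weight, the top_k cutoff) are assumptions made purely for illustration.

import torch

def value_guided_decode(mllm, tokenizer, image, prompt, visual_value_fn,
                        alpha=0.5, top_k=8, max_new_tokens=64):
    # Greedy search in which each of the top-k next-token candidates is
    # re-scored as log p(token | context, image) + alpha * V(candidate, image),
    # where V is a visual value function (here an opaque callable).
    # The mllm(...) call signature is hypothetical, not any specific model API.
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = mllm(input_ids=input_ids, images=image).logits[:, -1, :]
        log_probs = torch.log_softmax(logits, dim=-1)
        cand_logp, cand_ids = log_probs.topk(top_k, dim=-1)
        best_score, best_tok = float("-inf"), None
        for lp, tok in zip(cand_logp[0], cand_ids[0]):
            candidate = torch.cat([input_ids, tok.view(1, 1)], dim=-1)
            # Higher value = candidate text agrees better with local visual evidence.
            score = lp.item() + alpha * visual_value_fn(candidate, image)
            if score > best_score:
                best_score, best_tok = score, tok.view(1, 1)
        input_ids = torch.cat([input_ids, best_tok], dim=-1)
        if best_tok.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

In this toy form the visual value function simply biases the greedy token choice; the actual LPS procedure may define, weight, and search over the value differently, so treat the sketch as a reading aid rather than a reference implementation.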

View on arXiv: https://arxiv.org/abs/2506.06729
@article{gao2025_2506.06729,
  title={Mitigating Object Hallucination via Robust Local Perception Search},
  author={Zixian Gao and Chao Yang and Zhanhui Zhou and Xing Xu and Chaochao Lu},
  journal={arXiv preprint arXiv:2506.06729},
  year={2025}
}