ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models

17 March 2025
Hao Yin
Guangzong Si
Zilei Wang
Abstract

Contrastive decoding strategies are widely used to mitigate object hallucinations in multimodal large language models (MLLMs). By reducing over-reliance on language priors, these strategies keep generated content closely grounded in visual inputs, producing contextually accurate outputs. Because contrastive decoding requires no additional training or external tools, it is both computationally efficient and versatile, making it highly attractive. However, these methods have two main limitations: (1) bluntly suppressing language priors can compromise the coherence and accuracy of generated content, and (2) processing contrastive inputs adds computational load, substantially slowing inference. To address these challenges, we propose Visual Amplification Fusion (VAF), a plug-and-play technique that enhances attention to visual signals within the model's middle layers, where modality fusion predominantly occurs. This approach captures visual features more effectively, reducing the model's bias toward the language modality. Experimental results demonstrate that VAF significantly reduces hallucinations across various MLLMs without affecting inference speed, while maintaining the coherence and accuracy of generated outputs.
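The paper itself details the fusion mechanism; as a rough illustration only, the sketch below shows one way such an adjustment could look, assuming it amounts to reweighting pre-softmax attention toward visual-token positions. The function name, the amplification factor alpha, and its default value are hypothetical here, not taken from the paper.

import math
import torch

def amplify_visual_attention(attn_scores, visual_mask, alpha=1.5):
    """Hypothetical VAF-style adjustment (illustrative, not the authors' code).

    Boosts the pre-softmax attention paid to visual tokens so they carry
    more weight during modality fusion.

    attn_scores: (batch, heads, q_len, kv_len) attention logits
    visual_mask: (kv_len,) bool tensor, True at visual-token positions
    alpha:       amplification factor; > 1 boosts the visual signal
                 (the value 1.5 is an assumption, not from the paper)
    """
    bias = torch.zeros_like(attn_scores)
    # Adding log(alpha) to the logits of visual keys multiplies their
    # post-softmax weight by alpha before renormalization.
    bias[..., visual_mask] = math.log(alpha)
    return attn_scores + bias

# Toy demo: 2 heads, 4 queries, 6 keys; the first 3 keys are visual tokens.
scores = torch.randn(1, 2, 4, 6)
visual_mask = torch.tensor([True, True, True, False, False, False])
weights = torch.softmax(amplify_visual_attention(scores, visual_mask), dim=-1)
print(weights[0, 0])  # visual keys now receive proportionally more mass

In a real MLLM, such a bias would be applied only at the fusion-heavy middle layers, e.g. via forward hooks on those attention modules. Because it adds a constant to existing logits rather than requiring a second contrastive forward pass, it leaves inference speed essentially unchanged, consistent with the efficiency claim in the abstract.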

@article{yin2025_2503.13107,
  title={ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models},
  author={Hao Yin and Guangzong Si and Zilei Wang},
  journal={arXiv preprint arXiv:2503.13107},
  year={2025}
}