Despite significant advances in Vision-Language Models (VLMs), object hallucination remains a critical obstacle to accurate visual understanding. To address this issue, we propose SECOND (Selective and Contrastive Decoding), a novel approach that enables VLMs to effectively leverage multi-scale visual information in an object-centric manner, closely mirroring human visual perception. SECOND progressively selects and integrates multi-scale visual information, enabling a more precise interpretation of images. By iteratively contrasting this information across scales, SECOND significantly reduces perceptual hallucinations and outperforms prior methods on a wide range of benchmarks. Our theoretical analysis and experiments highlight the largely unexplored potential of multi-scale processing in VLMs, showing that prioritizing and contrasting across scales is more effective than existing decoding methods.
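To make the decoding recipe concrete, below is a minimal PyTorch sketch of one contrastive step across two visual scales. This is an illustration of the general contrastive-decoding idea only, not the paper's implementation: the function name contrastive_decode_step, the weights alpha and beta, and the adaptive plausibility cutoff are assumptions borrowed from standard contrastive-decoding practice.

    import torch

    def contrastive_decode_step(logits_fine, logits_coarse, alpha=1.0, beta=0.1):
        """Contrast next-token logits from fine-scale visual features against
        those from a coarser scale, so that tokens supported only by the
        low-resolution view (a common source of hallucination) are suppressed.

        logits_fine, logits_coarse: (batch, vocab) next-token logits computed
        with the same prompt but different visual input scales (hypothetical).
        """
        # Adaptive plausibility constraint: keep only tokens whose fine-scale
        # probability is within a factor beta of the fine-scale maximum.
        probs_fine = torch.softmax(logits_fine, dim=-1)
        cutoff = beta * probs_fine.max(dim=-1, keepdim=True).values
        plausible = probs_fine >= cutoff

        # Contrast log-probabilities across scales: boost what the fine scale
        # favors relative to the coarse scale.
        contrast = (1 + alpha) * torch.log_softmax(logits_fine, dim=-1) \
                   - alpha * torch.log_softmax(logits_coarse, dim=-1)
        contrast = contrast.masked_fill(~plausible, float("-inf"))
        return contrast.argmax(dim=-1)  # greedy pick of the contrasted scores

In an iterative multi-scale variant, the coarse-scale logits at each step could be those from the previously selected scale, with the contrast repeated as finer, object-centric crops are integrated; how SECOND selects which scales to contrast is specified in the paper, not here.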
@article{park2025_2506.08391,
  title={SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding},
  author={Woohyeon Park and Woojin Kim and Jaeik Kim and Jaeyoung Do},
  journal={arXiv preprint arXiv:2506.08391},
  year={2025}
}