
ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs

Main: 13 pages · 6 figures · 6 tables · Bibliography: 5 pages · Appendix: 5 pages
Abstract

Reinforcement learning (RL) has shown great effectiveness for fine-tuning large language models (LLMs) using tasks that are challenging yet easily verifiable, such as math reasoning or code generation. However, extending this success to visual perception in vision-language models (VLMs) has been impeded by the scarcity of vision-centric tasks that are simultaneously challenging and unambiguously verifiable. To this end, we introduce ViCrit (Visual Caption Hallucination Critic), an RL proxy task that trains VLMs to localize a subtle, synthetic visual hallucination injected into paragraphs of human-written image captions. Starting from a 200-word caption, we inject a single, subtle visual description error, altering a few words describing objects, attributes, counts, or spatial relations, and task the model with pinpointing the corrupted span given the image and the modified caption. This formulation preserves the full perceptual difficulty while providing a binary, exact-match reward that is easy to compute and unambiguous. Models trained with the ViCrit task exhibit substantial gains across a variety of VL benchmarks. Crucially, the improvements transfer beyond natural-image training data to abstract image reasoning and visual math, showing promise of learning to perceive rather than merely memorizing seen objects. To facilitate evaluation, we further introduce ViCrit-Bench, a category-balanced diagnostic benchmark that systematically probes perception errors across diverse image domains and error types. Together, our results demonstrate that fine-grained hallucination criticism is an effective and generalizable objective for enhancing visual perception in VLMs.
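As a rough illustration of the task construction and reward described in the abstract, the Python sketch below shows how a single caption span might be corrupted and how a binary exact-match reward could be computed. The function names, the substitution table, and the matching rule are illustrative assumptions for this sketch, not the authors' released pipeline.

import random

# Hypothetical substitution table mapping an original phrase to a subtly wrong
# alternative (attribute, count, or spatial-relation error). The actual ViCrit
# data construction builds these edits differently; this is only a sketch.
SUBSTITUTIONS = {
    "red umbrella": "blue umbrella",             # attribute error
    "three dogs": "four dogs",                   # counting error
    "left of the bench": "right of the bench",   # spatial-relation error
}

def inject_hallucination(caption: str):
    """Replace one matching phrase in the caption with its corrupted version.

    Returns (corrupted_caption, corrupted_span), or None if no phrase matches.
    """
    candidates = [(orig, wrong) for orig, wrong in SUBSTITUTIONS.items() if orig in caption]
    if not candidates:
        return None
    original_span, corrupted_span = random.choice(candidates)
    return caption.replace(original_span, corrupted_span, 1), corrupted_span

def exact_match_reward(predicted_span: str, corrupted_span: str) -> float:
    """Binary reward: 1.0 iff the predicted span exactly matches the injected error."""
    return 1.0 if predicted_span.strip().lower() == corrupted_span.strip().lower() else 0.0

if __name__ == "__main__":
    caption = "A woman holding a red umbrella stands left of the bench with three dogs."
    corrupted_caption, span = inject_hallucination(caption)
    print(corrupted_caption)
    print(exact_match_reward(span, span))  # an exact prediction of the corrupted span -> 1.0

Because the reward is a simple string comparison against a known injected span, it stays cheap to compute and unambiguous to verify, which is the property the paper relies on for RL fine-tuning.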

@article{wang2025_2506.10128,
  title={ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs},
  author={Xiyao Wang and Zhengyuan Yang and Chao Feng and Yongyuan Liang and Yuhang Zhou and Xiaoyu Liu and Ziyi Zang and Ming Li and Chung-Ching Lin and Kevin Lin and Linjie Li and Furong Huang and Lijuan Wang},
  journal={arXiv preprint arXiv:2506.10128},
  year={2025}
}