
VL-GenRM: Enhancing Vision-Language Verification via Vision Experts and Iterative Training

Main: 9 pages · 5 figures · 15 tables · Bibliography: 4 pages · Appendix: 10 pages
Abstract

Reinforcement Fine-Tuning (RFT) with verifiable rewards has advanced large language models but remains underexplored for Vision-Language (VL) models. The Vision-Language Reward Model (VL-RM) is key to aligning VL models by providing structured feedback, yet training effective VL-RMs faces two major challenges. First, the bootstrapping dilemma arises as high-quality training data depends on already strong VL models, creating a cycle where self-generated supervision reinforces existing biases. Second, modality bias and negative example amplification occur when VL models hallucinate incorrect visual attributes, leading to flawed preference data that further misguides training. To address these issues, we propose an iterative training framework leveraging vision experts, Chain-of-Thought (CoT) rationales, and Margin-based Rejection Sampling. Our approach refines preference datasets, enhances structured critiques, and iteratively improves reasoning. Experiments across VL-RM benchmarks demonstrate superior performance in hallucination detection and multimodal reasoning, advancing VL model alignment with reinforcement learning.
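The abstract names Margin-based Rejection Sampling as one ingredient for refining preference data but does not define it here. As an illustration of the general idea only (not the paper's actual implementation), the sketch below filters preference pairs by the score gap a reward model assigns between the best and worst sampled responses; the function names, scoring interface, and margin threshold are all assumptions.

```python
# Illustrative sketch of margin-based rejection sampling for preference data.
# The scoring interface, margin value, and helper names are assumptions,
# not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class PreferencePair:
    prompt: str    # multimodal prompt (e.g., image reference plus question)
    chosen: str    # response the reward model scores higher
    rejected: str  # response the reward model scores lower


def margin_based_rejection_sampling(
    candidates: List[Tuple[str, List[str]]],
    score: Callable[[str, str], float],  # hypothetical VL-RM scoring function
    margin: float = 1.0,
) -> List[PreferencePair]:
    """Keep only preference pairs whose reward margin exceeds `margin`.

    For each prompt, score all sampled responses, pair the highest-scoring
    against the lowest-scoring, and reject pairs whose score gap is too small
    to provide a reliable training signal.
    """
    kept: List[PreferencePair] = []
    for prompt, responses in candidates:
        if len(responses) < 2:
            continue
        scored = sorted(responses, key=lambda r: score(prompt, r))
        worst, best = scored[0], scored[-1]
        if score(prompt, best) - score(prompt, worst) >= margin:
            kept.append(PreferencePair(prompt, chosen=best, rejected=worst))
    return kept
```

In this reading, a larger margin trades dataset size for cleaner supervision, which is consistent with the abstract's goal of avoiding flawed preference data that misguides training.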

@article{zhang2025_2506.13888,
  title={VL-GenRM: Enhancing Vision-Language Verification via Vision Experts and Iterative Training},
  author={Jipeng Zhang and Kehao Miao and Renjie Pi and Zhaowei Wang and Runtao Liu and Rui Pan and Tong Zhang},
  journal={arXiv preprint arXiv:2506.13888},
  year={2025}
}