Benchmarking Multimodal CoT Reward Model Stepwise by Visual Program

9 April 2025
Minghe Gao
Xuqi Liu
Zhongqi Yue
Yang Wu
Shuang Chen
Juncheng Billy Li
Siliang Tang
Fei Wu
Tat-Seng Chua
Yueting Zhuang
Abstract

Recent advancements in the use of reward signals for Large Language Models (LLMs) are remarkable. However, transitioning reward signals to the multimodal domain poses significant challenges, including labor-intensive annotations, over-reliance on one-step rewards, and inadequate evaluation. To address these issues, we propose SVIP, a novel approach for automatically training a step-level, multi-dimensional Chain-of-Thought (CoT) reward model. SVIP generates code for solving visual tasks and transforms the analysis of the code blocks into evaluations of the corresponding CoT steps, which serve as training samples. We then train the SVIP-Reward model using a multi-head attention mechanism called TriAtt-CoT. The advantages of SVIP-Reward are evident throughout the entire MLLM training and inference pipeline. We also introduce a benchmark for CoT reward model training and testing. Experimental results demonstrate that SVIP-Reward improves MLLM performance in both training and inference-time scaling, yielding better results on benchmarks while reducing hallucinations and enhancing reasoning ability.
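The abstract's core idea — solve a visual task with a generated program and reuse the per-block analysis as step-level reward labels — can be sketched as follows. This is a minimal illustration of that pipeline, not the authors' implementation: the function names, the per-step reward dimensions, and the toy "executability" check are all assumptions for demonstration purposes.

```python
# Sketch of turning a visual program's code blocks into step-level
# CoT reward training samples (hypothetical names and dimensions).

def blocks_to_step_samples(code_blocks, step_check):
    """Pair each CoT step with its code block and a stepwise verdict.

    code_blocks: list of (cot_step_text, code_snippet) pairs
    step_check:  callable that analyzes a snippet and returns a dict of
                 per-dimension scores (the dimensions here are made up)
    """
    samples = []
    for step_text, snippet in code_blocks:
        scores = step_check(snippet)
        samples.append({"step": step_text, "code": snippet, "reward": scores})
    return samples


def toy_check(snippet):
    """Toy stand-in for program analysis: a block is 'correct' iff it runs."""
    try:
        exec(snippet, {})
        return {"executable": 1}
    except Exception:
        return {"executable": 0}


demo = [
    ("Count the objects in the image", "n = 2 + 3"),
    ("Report the final answer", "answer = undefined_name"),  # fails at runtime
]
samples = blocks_to_step_samples(demo, toy_check)
for s in samples:
    print(s["step"], "->", s["reward"])
```

A real system would replace `toy_check` with a richer analysis of the program's intermediate results and emit multi-dimensional labels, which is what the step-level reward model is then trained on.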

@article{gao2025_2504.06606,
  title={Benchmarking Multimodal CoT Reward Model Stepwise by Visual Program},
  author={Minghe Gao and Xuqi Liu and Zhongqi Yue and Yang Wu and Shuang Chen and Juncheng Li and Siliang Tang and Fei Wu and Tat-Seng Chua and Yueting Zhuang},
  journal={arXiv preprint arXiv:2504.06606},
  year={2025}
}