VReST: Enhancing Reasoning in Large Vision-Language Models through Tree Search and Self-Reward Mechanism

Main: 8 pages, Appendix: 9 pages, Bibliography: 3 pages; 6 figures, 8 tables
Abstract

Large Vision-Language Models (LVLMs) have shown exceptional performance in multimodal tasks, but their effectiveness in complex visual reasoning is still constrained, especially when employing Chain-of-Thought prompting techniques. In this paper, we propose VReST, a novel training-free approach that enhances Reasoning in LVLMs through Monte Carlo Tree Search and Self-Reward mechanisms. VReST meticulously traverses the reasoning landscape by establishing a search tree, where each node encapsulates a reasoning step, and each path delineates a comprehensive reasoning sequence. Our innovative multimodal Self-Reward mechanism assesses the quality of reasoning steps by integrating the utility of sub-questions, answer correctness, and the relevance of vision-language clues, all without the need for additional models. VReST surpasses current prompting methods and secures state-of-the-art performance across three multimodal mathematical reasoning benchmarks. Furthermore, it substantiates the efficacy of test-time scaling laws in multimodal tasks, offering a promising direction for future research.
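
To make the search procedure concrete, below is a minimal, self-contained Python sketch of Monte Carlo Tree Search over reasoning steps in the spirit of the abstract. Everything in it is an assumption for illustration: the `Node` class, `generate_steps`, and `self_reward` are hypothetical stand-ins for the paper's LVLM-driven step generation and its multimodal self-reward (which combines sub-question utility, answer correctness, and vision-language clue relevance); none of the paper's actual prompts, reward formulas, or hyperparameters are reproduced here.

```python
import math
import random

class Node:
    """One node per reasoning step; a root-to-leaf path is a full reasoning chain."""
    def __init__(self, step, parent=None):
        self.step = step          # one reasoning step (e.g., a sub-question + answer)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0          # accumulated self-reward

    def ucb(self, c=1.4):
        # Standard UCT score: mean reward (exploitation) + exploration bonus.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )

def generate_steps(node):
    # Hypothetical placeholder: sample candidate next reasoning steps from an LVLM.
    return [f"{node.step} -> step{i}" for i in range(2)]

def self_reward(path):
    # Hypothetical placeholder for the multimodal self-reward; the paper scores
    # sub-question utility, answer correctness, and vision-language clue
    # relevance with the LVLM itself. A random stand-in keeps this runnable.
    return random.random()

def mcts(root, iterations=50, max_depth=4):
    for _ in range(iterations):
        # 1. Selection: descend by UCB until a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # 2. Expansion: add candidate next steps unless the chain is deep enough.
        depth, n = 0, node
        while n.parent:
            depth, n = depth + 1, n.parent
        if depth < max_depth:
            node.children = [Node(s, parent=node) for s in generate_steps(node)]
            node = random.choice(node.children)
        # 3. Evaluation: score the root-to-leaf reasoning path with the self-reward.
        path, n = [], node
        while n:
            path.append(n.step)
            n = n.parent
        reward = self_reward(list(reversed(path)))
        # 4. Backpropagation: update visit counts and values along the path.
        n = node
        while n:
            n.visits += 1
            n.value += reward
            n = n.parent
    # Extract the chain with the highest mean self-reward.
    best, answer = root, [root.step]
    while best.children:
        best = max(best.children, key=lambda c: c.value / max(c.visits, 1))
        answer.append(best.step)
    return answer

print(mcts(Node("question")))
```

Note the design point this sketch is meant to surface: because evaluation reuses the model itself rather than a separate reward model, spending more search iterations directly trades test-time compute for better reasoning chains, which is the sense in which the abstract appeals to test-time scaling.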

@article{zhang2025_2506.08691,
  title={VReST: Enhancing Reasoning in Large Vision-Language Models through Tree Search and Self-Reward Mechanism},
  author={Congzhi Zhang and Jiawei Peng and Zhenglin Wang and Yilong Lai and Haowen Sun and Heng Chang and Fei Ma and Weijiang Yu},
  journal={arXiv preprint arXiv:2506.08691},
  year={2025}
}