
Reinforcing VLMs to Use Tools for Detailed Visual Reasoning Under Resource Constraints

Main: 6 pages · 4 figures · 2 tables · Bibliography: 3 pages · Appendix: 1 page
Abstract

Despite tremendous recent advances in large model reasoning ability, vision-language models (VLMs) still struggle with detailed visual reasoning, especially when compute resources are limited. To address this challenge, we draw inspiration from DeepSeek-R1-style methods and train smaller-scale VLMs with Group Relative Policy Optimization (GRPO) to use external tools such as zooming. The greatest benefit comes from combining GRPO learning, a simple reward structure, a simplified tool-calling interface, additional tokens allocated to the result of the tool call, and a training data mix that over-represents visually difficult examples. Compared to similarly-sized baseline models, our method achieves better performance on some visual question-answering (VQA) tasks, thanks to the detailed visual information gathered from the external tool.
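To make the ingredients above concrete, the sketch below illustrates one plausible form of a simplified zoom tool interface and a simple reward combining answer correctness with well-formed tool use. The tag format, function names, and reward weights are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): a zoom tool plus a simple
# GRPO-style reward. Tool tag format, crop size, and reward weights
# are assumptions chosen for illustration.
import re
from PIL import Image

def zoom(image: Image.Image, box: tuple[float, float, float, float],
         out_size: int = 512) -> Image.Image:
    """Crop a normalized (x0, y0, x1, y1) region and upsample it so the
    model can inspect fine-grained visual detail."""
    w, h = image.size
    x0, y0, x1, y1 = box
    crop = image.crop((int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h)))
    return crop.resize((out_size, out_size), Image.BICUBIC)

# Hypothetical simplified tool-call syntax: <zoom>x0, y0, x1, y1</zoom>
TOOL_CALL = re.compile(
    r"<zoom>\s*([\d.]+),\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)\s*</zoom>")

def reward(completion: str, gold_answer: str) -> float:
    """Simple reward: +1.0 for the correct final answer, +0.1 for a
    well-formed tool call (weights are illustrative only)."""
    r = 0.0
    if TOOL_CALL.search(completion):
        r += 0.1
    if gold_answer.lower() in completion.lower().split("answer:")[-1]:
        r += 1.0
    return r
```

In a GRPO loop, several completions would be sampled per question, scored with a reward like this, and updated using group-relative advantages; the zoomed crop would be fed back to the model within the extra tokens reserved for tool results.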

@article{kumar2025_2506.14821,
  title={Reinforcing VLMs to Use Tools for Detailed Visual Reasoning Under Resource Constraints},
  author={Sunil Kumar and Bowen Zhao and Leo Dirac and Paulina Varshavskaya},
  journal={arXiv preprint arXiv:2506.14821},
  year={2025}
}