RBench-V: A Primary Assessment for Visual Reasoning Models with Multi-modal Outputs

The rapid advancement of native multi-modal models and omni-models, exemplified by GPT-4o, Gemini, and o3, with their capability to process and generate content across modalities such as text and images, marks a significant milestone in the evolution of intelligence. Systematic evaluation of their multi-modal output capabilities in visual thinking processes (also known as multi-modal chain of thought, M-CoT) has therefore become critically important. However, existing benchmarks for evaluating multi-modal models primarily focus on assessing multi-modal inputs and text-only reasoning, while neglecting the importance of reasoning through multi-modal outputs. In this paper, we present a benchmark, dubbed RBench-V, designed to assess models' vision-indispensable reasoning abilities. To construct RBench-V, we carefully hand-pick 803 questions covering math, physics, counting, and games. Unlike previous benchmarks that typically specify certain input modalities, RBench-V presents problems centered on multi-modal outputs, which require image manipulation, such as generating novel images and constructing auxiliary lines, to support the reasoning process. We evaluate numerous open- and closed-source models on RBench-V, including o3, Gemini 2.5 Pro, and Qwen2.5-VL. Even the best-performing model, o3, achieves only 25.8% accuracy on RBench-V, far below the human score of 82.3%, highlighting that current models struggle to leverage multi-modal reasoning. Data and code are available at this https URL.
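
For concreteness, the sketch below illustrates how an accuracy score of the kind reported above (e.g., 25.8% for o3) is typically computed for a question-answering benchmark like RBench-V: each item pairs a multi-modal question with a reference answer, a model produces a final textual answer, and predictions are scored against references. The file name, field names, and `query_model` stub are assumptions made for illustration; they are not the authors' released evaluation code.

```python
# Minimal sketch of a benchmark accuracy loop (illustrative, not RBench-V's official harness).
import json

def query_model(question: str, image_path: str) -> str:
    """Placeholder for a call to a multi-modal model (e.g., o3, Gemini 2.5 Pro, Qwen2.5-VL)."""
    return "42"  # stub answer for demonstration

def evaluate(items) -> float:
    """Score predictions against reference answers by normalized exact match."""
    correct = 0
    for item in items:
        pred = query_model(item["question"], item["image"])
        if pred.strip().lower() == item["answer"].strip().lower():
            correct += 1
    return correct / max(len(items), 1)

if __name__ == "__main__":
    # Hypothetical JSONL layout: one item per line with question/image/answer fields.
    with open("rbench_v.jsonl") as f:
        items = [json.loads(line) for line in f]
    print(f"accuracy: {evaluate(items):.1%}")
```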