
SIRI-Bench: Challenging VLMs' Spatial Intelligence through Complex Reasoning Tasks

Main: 11 pages, 9 figures; Bibliography: 4 pages; Appendix: 1 page
Abstract

Large Language Models (LLMs) are advancing rapidly in complex reasoning, exhibiting remarkable generalization in mathematics and programming. In contrast, while spatial intelligence is fundamental for Vision-Language Models (VLMs) in real-world interaction, the systematic evaluation of their complex reasoning ability within spatial contexts remains underexplored. To bridge this gap, we introduce SIRI-Bench, a benchmark designed to evaluate VLMs' spatial intelligence through video-based reasoning tasks. SIRI-Bench comprises nearly 1K video-question-answer triplets, where each problem is embedded in a realistic 3D scene and captured on video. By carefully designing questions and their corresponding 3D scenes, our benchmark ensures that solving each question requires both spatial comprehension for extracting information and high-level reasoning for deriving solutions, making it a challenging benchmark for evaluating VLMs. To facilitate large-scale data synthesis, we develop an Automatic Scene Creation Engine. This engine, leveraging multiple specialized LLM agents, generates realistic 3D scenes from abstract math problems while remaining faithful to the original descriptions. Experimental results reveal that state-of-the-art VLMs struggle significantly on SIRI-Bench, underscoring the challenge of spatial reasoning. We hope that our study will draw researchers' attention to spatially grounded reasoning and advance VLMs in visual problem-solving.

@article{song2025_2506.14512,
  title={SIRI-Bench: Challenging VLMs' Spatial Intelligence through Complex Reasoning Tasks},
  author={Zijian Song and Xiaoxin Lin and Qiuming Huang and Guangrun Wang and Liang Lin},
  journal={arXiv preprint arXiv:2506.14512},
  year={2025}
}