SpatialReasoner: Towards Explicit and Generalizable 3D Spatial Reasoning

Recent studies in 3D spatial reasoning explore data-driven approaches and achieve enhanced spatial reasoning performance with reinforcement learning (RL). However, these methods typically perform spatial reasoning implicitly, and it remains underexplored whether the acquired 3D knowledge generalizes to unseen question types at any stage of training. In this work we introduce SpatialReasoner, a novel large vision-language model (LVLM) that addresses 3D spatial reasoning with explicit 3D representations shared across three stages: 3D perception, computation, and reasoning. Explicit 3D representations provide a coherent interface that supports advanced 3D spatial reasoning and enable us to study the factual errors made by LVLMs. Results show that SpatialReasoner achieves improved performance on a variety of spatial reasoning benchmarks and generalizes better when evaluated on novel 3D spatial reasoning questions. Our study bridges the 3D parsing capabilities of prior visual foundation models with the powerful reasoning abilities of large language models, opening new directions for 3D spatial reasoning.
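
To make the three-stage design concrete, below is a minimal Python sketch of a perception-computation-reasoning pipeline built around an explicit 3D representation. All names here (`Object3D`, `perceive`, `compute`, `reason`) and the field layout are illustrative assumptions, not the authors' actual interface; the abstract only states that explicit 3D representations are shared across the stages.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical explicit 3D representation shared across stages.
# Field names are assumptions for illustration, not the paper's schema.
@dataclass
class Object3D:
    label: str
    center: Tuple[float, float, float]  # (x, y, z) in camera coordinates
    size: Tuple[float, float, float]    # bounding-box extents (w, h, d)
    yaw: float                          # heading angle in radians

def perceive(image) -> List[Object3D]:
    """Stage 1: 3D perception. A real system would invoke a visual
    foundation model (detector / depth estimator) to parse the image
    into explicit 3D objects."""
    raise NotImplementedError("plug in a 3D perception model here")

def compute(objects: List[Object3D], a: str, b: str) -> dict:
    """Stage 2: computation. Derive quantities such as distances and
    relative directions directly from the explicit 3D representation,
    so factual errors are inspectable rather than hidden in weights."""
    o1 = next(o for o in objects if o.label == a)
    o2 = next(o for o in objects if o.label == b)
    dx, dy, dz = (o2.center[i] - o1.center[i] for i in range(3))
    return {
        "distance": (dx**2 + dy**2 + dz**2) ** 0.5,
        # assumes a +x-right camera convention
        "b_right_of_a": dx > 0,
    }

def reason(facts: dict, question: str) -> str:
    """Stage 3: reasoning. An LVLM would verbalize an answer grounded
    in the computed facts; here we simply template one."""
    return f"Q: {question} | grounded facts: {facts}"
```

Because the intermediate 3D facts are explicit, the same `compute` outputs can be reused across question types, which is one plausible reading of why such a design would generalize better than implicit reasoning.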
```bibtex
@article{ma2025_2504.20024,
  title   = {SpatialReasoner: Towards Explicit and Generalizable 3D Spatial Reasoning},
  author  = {Wufei Ma and Yu-Cheng Chou and Qihao Liu and Xingrui Wang and Celso de Melo and Jieneng Chen and Jianwen Xie and Alan Yuille},
  journal = {arXiv preprint arXiv:2504.20024},
  year    = {2025}
}
```