
MMPerspective: Do MLLMs Understand Perspective? A Comprehensive Benchmark for Perspective Perception, Reasoning, and Robustness

Main: 9 pages
Appendix: 14 pages
Bibliography: 6 pages
Figures: 42
Tables: 5
Abstract

Understanding perspective is fundamental to human visual perception, yet the extent to which multimodal large language models (MLLMs) internalize perspective geometry remains unclear. We introduce MMPerspective, the first benchmark specifically designed to systematically evaluate MLLMs' understanding of perspective through 10 carefully crafted tasks across three complementary dimensions: Perspective Perception, Reasoning, and Robustness. Our benchmark comprises 2,711 real-world and synthetic image instances with 5,083 question-answer pairs that probe key capabilities such as vanishing point perception and counting, perspective type reasoning, line relationship understanding in 3D space, and invariance to perspective-preserving transformations. Through a comprehensive evaluation of 43 state-of-the-art MLLMs, we uncover significant limitations: while models demonstrate competence on surface-level perceptual tasks, they struggle with compositional reasoning and with maintaining spatial consistency under perturbations. Our analysis further reveals intriguing relationships between model architecture, scale, and perspective capabilities, highlighting both robustness bottlenecks and the benefits of chain-of-thought prompting. MMPerspective establishes a valuable testbed for diagnosing and advancing spatial understanding in vision-language systems. Resources are available at: this https URL

@article{tang2025_2505.20426,
  title={MMPerspective: Do MLLMs Understand Perspective? A Comprehensive Benchmark for Perspective Perception, Reasoning, and Robustness},
  author={Yunlong Tang and Pinxin Liu and Mingqian Feng and Zhangyun Tan and Rui Mao and Chao Huang and Jing Bi and Yunzhong Xiao and Susan Liang and Hang Hua and Ali Vosoughi and Luchuan Song and Zeliang Zhang and Chenliang Xu},
  journal={arXiv preprint arXiv:2505.20426},
  year={2025}
}