Evaluating Foundation Models' 3D Understanding Through Multi-View Correspondence Analysis

Valentina Lilova
Toyesh Chakravorty
Julian I. Bibo
Emma Boccaletti
Brandon Li
Lívia Baxová
Cees G. M. Snoek
Mohammadreza Salehi
Main: 10 pages · Bibliography: 4 pages · Appendix: 13 pages · 30 figures · 11 tables
Abstract

Benchmarking the 3D spatial understanding of foundation models is essential for real-world applications such as robotics and autonomous driving. Existing evaluations often rely on downstream finetuning with linear heads or task-specific decoders, making it difficult to isolate the intrinsic 3D reasoning ability of pretrained encoders. In this work, we introduce a novel benchmark for in-context 3D scene understanding that requires no finetuning and directly probes the quality of dense visual features. Building on the Hummingbird framework, which evaluates in-context 2D scene understanding, we extend the setup to 3D using the Multi-View ImageNet (MVImgNet) dataset. Given a set of object images captured from specific viewpoints (keys), we benchmark how well novel views (queries) are segmented and report scores across four difficulty categories (easy, medium, hard, and extreme) defined by the key-query viewpoint contrast. We benchmark 8 state-of-the-art foundation models and show that DINO-based encoders remain competitive across large viewpoint shifts, while 3D-aware models such as VGGT require dedicated multi-view adjustments. Our code is publicly available at this https URL.
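The evaluation protocol follows Hummingbird-style in-context segmentation: a frozen encoder embeds the key views into a dense feature-and-label memory, and each query patch is labeled by soft nearest-neighbor retrieval from that memory. The sketch below illustrates the idea only; it is not the authors' released code, and the `encoder` interface (returning patch features of shape (N, P, D)), the `k` and `temperature` values, and the tensor layouts are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of Hummingbird-style in-context
# segmentation: key-view patch features form a memory, and each query patch
# is labeled by soft k-NN retrieval over that memory.

import torch
import torch.nn.functional as F

@torch.no_grad()
def in_context_segment(encoder, key_images, key_masks, query_image,
                       num_classes, k=30, temperature=0.02):
    """Label query patches by retrieving the most similar key patches.

    key_images:  (N, 3, H, W) object views at known angles (the keys)
    key_masks:   (N, H, W) integer segmentation masks for the key views
    query_image: (1, 3, H, W) a novel view to segment (the query)
    """
    # Dense patch features, L2-normalized so dot products are cosine sims.
    key_feats = F.normalize(encoder(key_images), dim=-1)   # (N, P, D)
    qry_feats = F.normalize(encoder(query_image), dim=-1)  # (1, P, D)

    # Downsample key masks to the patch grid and one-hot encode them.
    N, P, D = key_feats.shape
    side = int(P ** 0.5)  # assumes a square patch grid
    patch_labels = F.interpolate(key_masks[:, None].float(),
                                 size=(side, side),
                                 mode="nearest").long().flatten(1)  # (N, P)
    memory_feats = key_feats.reshape(N * P, D)
    memory_labels = F.one_hot(patch_labels.reshape(-1), num_classes).float()

    # Soft k-NN: each query patch attends to its k most similar key patches.
    sims = qry_feats[0] @ memory_feats.T                    # (P, N*P)
    topk_sims, topk_idx = sims.topk(k, dim=-1)
    weights = F.softmax(topk_sims / temperature, dim=-1)    # (P, k)
    probs = torch.einsum("pk,pkc->pc", weights, memory_labels[topk_idx])

    # Low-resolution label map; upsample to image size before scoring mIoU.
    return probs.argmax(-1).reshape(side, side)
```

In this sketch, the paper's difficulty binning (easy through extreme) would happen upstream, when key views are selected at a chosen angular contrast from the query view.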
