Z3D: Zero-Shot 3D Visual Grounding from Images

Nikita Drozdov
Andrey Lemeshko
Nikita Gavrilov
Anton Konushin
Danila Rukhovich
Maksim Kolodiazhnyi
Main: 3 pages · 4 figures · 8 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

3D visual grounding (3DVG) aims to localize objects in a 3D scene based on natural language queries. In this work, we explore zero-shot 3DVG from multi-view images alone, without requiring any geometric supervision or object priors. We introduce Z3D, a universal grounding pipeline that flexibly operates on multi-view images while optionally incorporating camera poses and depth maps. We identify key bottlenecks in prior zero-shot methods that cause significant performance degradation and address them with (i) a state-of-the-art zero-shot 3D instance segmentation method that generates high-quality 3D bounding box proposals and (ii) advanced reasoning via prompt-based segmentation, which utilizes the full capabilities of modern VLMs. Extensive experiments on the ScanRefer and Nr3D benchmarks demonstrate that our approach achieves state-of-the-art performance among zero-shot methods. Code is available at this https URL.
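To make the box-proposal step concrete: when depth maps and camera poses are available, pixels inside a 2D instance mask can be back-projected to 3D points, and the points aggregated across views into an axis-aligned 3D box proposal. The sketch below is a hypothetical illustration of that idea (function names and the list-based point representation are our own, not the paper's code):

```python
# Hypothetical sketch (not the authors' implementation): lifting per-view
# instance masks to a single 3D axis-aligned bounding-box proposal,
# assuming depth + camera pose already yield a 3D point per masked pixel.

def aabb_from_points(points):
    """Axis-aligned bounding box (min corner, max corner) of 3D points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def box_proposal(views):
    """Merge masked 3D points from all views into one box proposal.

    `views` is a list of per-view point lists: each entry holds the 3D
    points back-projected from pixels inside that view's instance mask.
    """
    merged = [p for view in views for p in view]
    return aabb_from_points(merged)
```

In practice the proposal would be refined (e.g. by filtering depth outliers before taking the extremes), but the aggregation across views is the core of turning image-space masks into 3D proposals.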
