
MolmoSpaces: A Large-Scale Open Ecosystem for Robot Navigation and Manipulation

Yejin Kim
Wilbert Pumacay
Omar Rayyan
Max Argus
Winson Han
Eli VanderBilt
Jordi Salvador
Abhay Deshpande
Rose Hendrix
Snehal Jauhri
Shuo Liu
Nur Muhammad Mahi Shafiullah
Maya Guru
Ainaz Eftekhar
Karen Farley
Donovan Clay
Jiafei Duan
Arjun Guru
Piper Wolters
Alvaro Herrasti
Ying-Chun Lee
Georgia Chalvatzaki
Yuchen Cui
Ali Farhadi
Dieter Fox
Ranjay Krishna
Comments: 17 pages (main), 5 pages (bibliography), 8 pages (appendix); 16 figures, 5 tables
Abstract

Deploying robots at scale demands robustness to the long tail of everyday situations. The countless variations in scene layout, object geometry, and task specification that characterize real environments are vast and underrepresented in existing robot benchmarks. Measuring this level of generalization requires infrastructure at a scale and diversity that physical evaluation alone cannot provide. We introduce MolmoSpaces, a fully open ecosystem to support large-scale benchmarking of robot policies. MolmoSpaces consists of over 230k diverse indoor environments, ranging from handcrafted household scenes to procedurally generated multi-room houses, populated with 130k richly annotated object assets, including 48k manipulable objects with 42M stable grasps. Crucially, these environments are simulator-agnostic, supporting popular options such as MuJoCo, Isaac, and ManiSkill. The ecosystem supports the full spectrum of embodied tasks: static and mobile manipulation, navigation, and multi-room long-horizon tasks requiring coordinated perception, planning, and interaction across entire indoor environments. We also design MolmoSpaces-Bench, a benchmark suite of 8 tasks in which robots interact with our diverse scenes and richly annotated objects. Our experiments show MolmoSpaces-Bench exhibits strong sim-to-real correlation (R = 0.96, ρ = 0.98), confirm that newer and stronger zero-shot policies outperform earlier versions on our benchmarks, and identify key sensitivities to prompt phrasing, initial joint positions, and camera occlusion. Through MolmoSpaces and its open-source assets and tooling, we provide a foundation for scalable data generation, policy training, and benchmark creation for robot learning research.
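As a rough illustration of how a sim-to-real correlation like the one reported above can be computed, here is a minimal Python sketch. It assumes R denotes Pearson's correlation and ρ Spearman's rank correlation (a common convention, not confirmed by the abstract), and the success rates below are made-up placeholders, not the paper's data.

    # Minimal sketch: correlate per-policy success rates measured in
    # simulation against those measured on real hardware.
    # NOTE: the numbers below are illustrative placeholders only.
    from scipy.stats import pearsonr, spearmanr

    sim_success  = [0.82, 0.64, 0.45, 0.71, 0.30]   # one entry per policy
    real_success = [0.78, 0.60, 0.40, 0.69, 0.25]

    r, _ = pearsonr(sim_success, real_success)      # linear correlation (R)
    rho, _ = spearmanr(sim_success, real_success)   # rank correlation (ρ)
    print(f"Pearson R = {r:.2f}, Spearman rho = {rho:.2f}")

A high value on both measures indicates that rankings and relative performance of policies in simulation transfer to the real world, which is what makes a simulated benchmark useful as a proxy for physical evaluation.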
