XEmbodied: A Foundation Model with Enhanced Geometric and Physical Cues for Large-Scale Embodied Environments

Kangan Qian
ChuChu Xie
Yang Zhong
Jingrui Pang
Siwen Jiao
Sicong Jiang
Zilin Huang
Yunlong Wang
Kun Jiang
Mengmeng Yang
Hao Ye
Guanghao Zhang
Hangjun Ye
Guang Chen
Long Chen
Diange Yang
Main: 14 pages · 29 figures · Bibliography: 9 pages · 18 tables · Appendix: 51 pages
Abstract

Vision-Language-Action (VLA) models drive next-generation autonomous systems, but training them requires scalable, high-quality annotations from complex environments. Current cloud pipelines rely on generic vision-language models (VLMs) that, owing to their 2D image-text pretraining, lack geometric reasoning and domain semantics. To address this mismatch, we propose XEmbodied, a cloud-side foundation model that endows VLMs with intrinsic 3D geometric awareness and grounding in physical cues (e.g., occupancy grids, 3D boxes). Instead of treating geometry as an auxiliary input, XEmbodied integrates geometric representations via a structured 3D Adapter and distills physical signals into context tokens using an Efficient Image-Embodied Adapter. Through a progressive domain curriculum and reinforcement-learning post-training, XEmbodied preserves general capabilities while demonstrating robust performance across 18 public benchmarks, with significant improvements in spatial reasoning, traffic semantics, embodied affordance, and out-of-distribution generalization for large-scale scenario mining and embodied VQA.
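
The abstract names two fusion components, a structured 3D Adapter and an Efficient Image-Embodied Adapter, without giving implementation details. The sketch below illustrates one plausible reading of the underlying idea: projecting physical cues such as occupancy grids and 3D boxes into a small set of context tokens that are concatenated with the VLM's vision and text tokens. Every module name, tensor shape, and the concatenation-based fusion here is an illustrative assumption, not the paper's actual design.

```python
# Minimal sketch (PyTorch) of adapter-based fusion of geometric/physical cues
# into a VLM token stream. All names, shapes, and design choices below are
# assumptions for illustration; the abstract does not specify XEmbodied's
# actual architecture.
import torch
import torch.nn as nn

class Geometric3DAdapter(nn.Module):
    """Projects a flattened occupancy grid into a fixed number of context tokens."""
    def __init__(self, grid_dim: int, hidden: int, n_tokens: int, d_model: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(grid_dim, hidden), nn.GELU(),
            nn.Linear(hidden, n_tokens * d_model),
        )
        self.n_tokens, self.d_model = n_tokens, d_model

    def forward(self, occ: torch.Tensor) -> torch.Tensor:
        # occ: (B, grid_dim) flattened occupancy -> (B, n_tokens, d_model)
        return self.proj(occ).view(-1, self.n_tokens, self.d_model)

class BoxAdapter(nn.Module):
    """Embeds per-object 3D boxes (center xyz, size lwh, yaw = 7 values) as tokens."""
    def __init__(self, d_model: int):
        super().__init__()
        self.embed = nn.Linear(7, d_model)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, N, 7) -> (B, N, d_model), one token per object
        return self.embed(boxes)

def fuse_context(text_tokens, image_tokens, geo_tokens, box_tokens):
    # Plain concatenation along the sequence axis; real systems may instead
    # use cross-attention or gated injection into the language backbone.
    return torch.cat([image_tokens, geo_tokens, box_tokens, text_tokens], dim=1)

if __name__ == "__main__":
    B, d = 2, 256
    geo = Geometric3DAdapter(grid_dim=16**3, hidden=512, n_tokens=8, d_model=d)
    box = BoxAdapter(d_model=d)
    occ = torch.rand(B, 16**3)      # toy occupancy grid, flattened
    boxes = torch.rand(B, 5, 7)     # 5 objects per scene
    text = torch.rand(B, 32, d)     # placeholder text embeddings
    image = torch.rand(B, 64, d)    # placeholder vision embeddings
    ctx = fuse_context(text, image, geo(occ), box(boxes))
    print(ctx.shape)                # torch.Size([2, 109, 256])
```

Concatenation keeps the sketch simple; whether XEmbodied fuses its geometric tokens this way, via cross-attention, or through some other injection mechanism is not stated in the abstract.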
