Towards Embodied Cognition in Robots via Spatially Grounded Synthetic Worlds

Main: 1 page · 1 figure · Bibliography: 2 pages
Abstract

We present a conceptual framework for training Vision-Language Models (VLMs) to perform Visual Perspective Taking (VPT), a core capability of embodied cognition that is essential for Human-Robot Interaction (HRI). As a first step toward this goal, we introduce a synthetic dataset, generated in NVIDIA Omniverse, that enables supervised learning for spatial reasoning tasks. Each instance includes an RGB image, a natural language description, and a ground-truth 4×4 transformation matrix representing object pose. We focus on inferring Z-axis distance as a foundational skill, with future extensions targeting full six-degree-of-freedom (6-DOF) reasoning. The dataset is publicly available to support further research. This work serves as a foundational step toward embodied AI systems capable of spatial understanding in interactive human-robot scenarios.
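
For illustration, here is a minimal sketch of how one such instance might be represented and how the Z-axis distance can be read off the 4×4 pose matrix. The class and field names (VPTInstance, rgb_image, pose) and the camera-frame convention are assumptions for this sketch, not the dataset's published schema.

# Hypothetical representation of one dataset instance (image, description, pose)
# and extraction of the Z-axis distance from a 4x4 homogeneous transform.
# Names and conventions are illustrative assumptions, not the authors' schema.
from dataclasses import dataclass

import numpy as np


@dataclass
class VPTInstance:
    """One synthetic training example: RGB render, description, and object pose."""
    rgb_image: np.ndarray   # H x W x 3 RGB image rendered in Omniverse
    description: str        # natural language description of the scene
    pose: np.ndarray        # 4x4 homogeneous transform (rotation + translation)


def z_distance(pose: np.ndarray) -> float:
    """Return the Z component of the translation vector.

    For a homogeneous transform T = [[R, t], [0, 1]], the translation t
    occupies the last column, so t_z is T[2, 3].
    """
    assert pose.shape == (4, 4), "expected a 4x4 transformation matrix"
    return float(pose[2, 3])


# Example: an object 1.5 units away along the camera's Z axis.
example_pose = np.eye(4)
example_pose[2, 3] = 1.5
instance = VPTInstance(
    rgb_image=np.zeros((480, 640, 3), dtype=np.uint8),
    description="A red cube sits on the table in front of the robot.",
    pose=example_pose,
)
print(z_distance(instance.pose))  # -> 1.5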

@article{currie2025_2505.14366,
  title={Towards Embodied Cognition in Robots via Spatially Grounded Synthetic Worlds},
  author={Joel Currie and Gioele Migno and Enrico Piacenti and Maria Elena Giannaccini and Patric Bach and Davide De Tommaso and Agnieszka Wykowska},
  journal={arXiv preprint arXiv:2505.14366},
  year={2025}
}