
Generating 6DoF Object Manipulation Trajectories from Action Description in Egocentric Vision

Comments: 8 pages (main) + 5 pages (bibliography) + 5 pages (appendix), 15 figures, 7 tables
Abstract

Learning to use tools or objects in everyday scenes, particularly handling them in various ways as instructed, is a key challenge for developing interactive robots. Training models to generate such manipulation trajectories requires a large and diverse collection of detailed manipulation demonstrations for various objects, which is nearly infeasible to gather at scale. In this paper, we propose a framework that leverages Ego-Exo4D, a large-scale egocentric and exocentric video dataset constructed globally with substantial effort, to extract diverse manipulation trajectories at scale. From these extracted trajectories and their associated textual action descriptions, we develop trajectory generation models based on visual and point cloud-based language models. On HOT3D, a recently proposed egocentric vision-based high-quality trajectory dataset, we confirm that our models generate valid object trajectories, establishing a training dataset and baseline models for the novel task of generating 6DoF manipulation trajectories from action descriptions in egocentric vision.
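For readers unfamiliar with the output format implied by the task, the sketch below shows one common way to represent a 6DoF object manipulation trajectory as data: a time series of SE(3) poses, each a 3D translation plus a unit quaternion. This is an illustrative assumption, not the paper's published interface; the names `make_pose` and the `(T, 7)` layout are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a 6DoF trajectory as a time series of object
# poses, each pose a 3D translation plus a unit-quaternion rotation.

def make_pose(translation, quaternion_wxyz):
    """Pack one 6DoF pose; the quaternion is normalized to unit length."""
    t = np.asarray(translation, dtype=np.float64)      # (3,) x, y, z
    q = np.asarray(quaternion_wxyz, dtype=np.float64)  # (4,) w, x, y, z
    q = q / np.linalg.norm(q)
    return np.concatenate([t, q])                      # (7,) per timestep

# A trajectory is then a (T, 7) array: T timesteps of [t_xyz | q_wxyz].
trajectory = np.stack([
    make_pose([0.00, 0.10, 0.30], [1.00, 0.0, 0.0, 0.0]),  # object at rest
    make_pose([0.02, 0.12, 0.32], [0.99, 0.0, 0.1, 0.0]),  # lifted and tilted
])
print(trajectory.shape)  # (2, 7)
```

Under this representation, a generation model conditioned on an egocentric image (or point cloud) and an action description would emit such a pose sequence for the manipulated object.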

@article{yoshida2025_2506.03605,
  title={Generating 6DoF Object Manipulation Trajectories from Action Description in Egocentric Vision},
  author={Tomoya Yoshida and Shuhei Kurita and Taichi Nishimura and Shinsuke Mori},
  journal={arXiv preprint arXiv:2506.03605},
  year={2025}
}