HoMeR: Learning In-the-Wild Mobile Manipulation via Hybrid Imitation and Whole-Body Control

Main: 9 pages
10 figures
Bibliography: 4 pages
Appendix: 5 pages
Abstract

We introduce HoMeR, an imitation learning framework for mobile manipulation that combines whole-body control with hybrid action modes that handle both long-range and fine-grained motion, enabling effective performance on realistic in-the-wild tasks. At its core is a fast, kinematics-based whole-body controller that maps desired end-effector poses to coordinated motion across the mobile base and arm. Within this reduced end-effector action space, HoMeR learns to switch between absolute pose predictions for long-range movement and relative pose predictions for fine-grained manipulation, offloading low-level coordination to the controller and focusing learning on task-level decisions. We deploy HoMeR on a holonomic mobile manipulator with a 7-DoF arm in a real home. We compare HoMeR to baselines without hybrid actions or whole-body control across 3 simulated and 3 real household tasks such as opening cabinets, sweeping trash, and rearranging pillows. Across tasks, HoMeR achieves an overall success rate of 79.17% using just 20 demonstrations per task, outperforming the next best baseline by 29.17% on average. HoMeR is also compatible with vision-language models and can leverage their internet-scale priors to better generalize to novel object appearances, layouts, and cluttered scenes. In summary, HoMeR moves beyond tabletop settings and demonstrates a scalable path toward sample-efficient, generalizable manipulation in everyday indoor spaces. Code, videos, and supplementary material are available at: this http URL
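To make the hybrid action interface concrete, below is a minimal Python sketch (not the authors' implementation) of how a policy's absolute or relative end-effector pose prediction could be resolved into a target pose and handed off to a whole-body controller. All class names, method signatures, and pose conventions here are assumptions for illustration only.

# Hypothetical sketch of the hybrid action interface described in the
# abstract: the policy predicts either an absolute end-effector target
# pose (long-range motion) or a relative pose delta (fine-grained
# manipulation), and a kinematics-based whole-body controller handles
# base-arm coordination. Names, shapes, and the controller API are
# assumptions, not the paper's code.

from dataclasses import dataclass
import numpy as np

@dataclass
class HybridAction:
    mode: str            # "absolute" (long-range) or "relative" (fine-grained)
    pose: np.ndarray     # 7-D end-effector pose: xyz position + wxyz quaternion

def to_target_pose(current_ee_pose: np.ndarray, action: HybridAction) -> np.ndarray:
    """Resolve a hybrid action into an absolute end-effector target pose."""
    if action.mode == "absolute":
        # Long-range mode: the policy directly specifies the goal pose.
        return action.pose
    # Fine-grained mode: treat the prediction as a small offset from the
    # current end-effector pose (position only in this sketch; a full
    # implementation would also compose orientations).
    target = current_ee_pose.copy()
    target[:3] += action.pose[:3]
    return target

def control_step(whole_body_controller, current_ee_pose: np.ndarray, action: HybridAction):
    """One step: the learned policy makes the task-level decision, while the
    whole-body controller coordinates the mobile base and 7-DoF arm."""
    target = to_target_pose(current_ee_pose, action)
    # Hypothetical controller call: maps a desired end-effector pose to
    # coordinated base and arm commands.
    return whole_body_controller.solve(target)

The design point this sketch reflects is the one stated in the abstract: the learned policy only chooses the action mode and end-effector pose, while all low-level base-arm coordination is delegated to the controller.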

@article{sundaresan2025_2506.01185,
  title={HoMeR: Learning In-the-Wild Mobile Manipulation via Hybrid Imitation and Whole-Body Control},
  author={Priya Sundaresan and Rhea Malhotra and Phillip Miao and Jingyun Yang and Jimmy Wu and Hengyuan Hu and Rika Antonova and Francis Engelmann and Dorsa Sadigh and Jeannette Bohg},
  journal={arXiv preprint arXiv:2506.01185},
  year={2025}
}