SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL

Main: 8 pages
Appendix: 4 pages
Bibliography: 5 pages
3 figures
3 tables
Abstract

Building capable household and industrial robots requires mastering the control of versatile, high-degree-of-freedom (DoF) systems such as mobile manipulators. While reinforcement learning (RL) holds promise for autonomously acquiring robot control policies, scaling it to high-DoF embodiments remains challenging. Direct RL in the real world demands both safe exploration and high sample efficiency, which are difficult to achieve in practice. Sim-to-real RL, on the other hand, is often brittle due to the reality gap. This paper introduces SLAC, a method that renders real-world RL feasible for complex embodiments by leveraging a low-fidelity simulator to pretrain a task-agnostic latent action space. SLAC trains this latent action space via a customized unsupervised skill discovery method designed to promote temporal abstraction, disentanglement, and safety, thereby facilitating efficient downstream learning. Once a latent action space is learned, SLAC uses it as the action interface for a novel off-policy RL algorithm to autonomously learn downstream tasks through real-world interactions. We evaluate SLAC against existing methods on a suite of bimanual mobile manipulation tasks, where it achieves state-of-the-art performance. Notably, SLAC learns contact-rich whole-body tasks in under an hour of real-world interactions, without relying on any demonstrations or hand-crafted behavior priors. More information, code, and videos at this http URL.
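To make the pretrain-then-act interface concrete, below is a minimal, hypothetical sketch of how a simulation-pretrained latent action decoder could serve as the action interface for downstream real-world RL. The class and parameter names (LatentActionDecoder, LatentActionEnv, DummyRobotEnv, skill_len) are illustrative assumptions, not from the paper; the decoder here is an untrained stand-in, and neither SLAC's unsupervised skill discovery objective nor its off-policy algorithm is reproduced.

# Hypothetical sketch: a frozen, simulation-pretrained decoder maps a
# low-dimensional latent action to whole-body joint commands, and a wrapper
# exposes the latent space as the action interface for a downstream RL agent.
import numpy as np
import torch
import torch.nn as nn


class LatentActionDecoder(nn.Module):
    """Maps (robot state, latent action) -> low-level joint command.

    In SLAC this would be pretrained in a low-fidelity simulator with
    unsupervised skill discovery; here it is an untrained stand-in that only
    illustrates the interface.
    """

    def __init__(self, state_dim: int, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, z], dim=-1))


class LatentActionEnv:
    """Wraps a low-level environment so the RL agent acts in latent space.

    Each latent action is held for `skill_len` low-level steps, a simple form
    of the temporal abstraction mentioned in the abstract.
    """

    def __init__(self, env, decoder: LatentActionDecoder, skill_len: int = 10):
        self.env, self.decoder, self.skill_len = env, decoder, skill_len

    def step(self, z: np.ndarray):
        total_reward, done = 0.0, False
        obs = self.env.current_obs()
        for _ in range(self.skill_len):
            with torch.no_grad():
                a = self.decoder(
                    torch.as_tensor(obs, dtype=torch.float32),
                    torch.as_tensor(z, dtype=torch.float32),
                ).numpy()
            obs, reward, done = self.env.step(a)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done


class DummyRobotEnv:
    """Minimal stand-in for a real whole-body control environment."""

    def __init__(self, obs_dim: int = 32, action_dim: int = 12):
        self.obs_dim, self.action_dim = obs_dim, action_dim
        self.obs = np.zeros(obs_dim, dtype=np.float32)

    def current_obs(self) -> np.ndarray:
        return self.obs

    def step(self, action: np.ndarray):
        # Random-walk dynamics and a placeholder reward, for illustration only.
        self.obs = np.clip(self.obs + 0.01 * np.random.randn(self.obs_dim), -1, 1)
        return self.obs.astype(np.float32), float(-np.linalg.norm(action)), False


if __name__ == "__main__":
    decoder = LatentActionDecoder(state_dim=32, latent_dim=8, action_dim=12)
    env = LatentActionEnv(DummyRobotEnv(), decoder, skill_len=10)
    z = np.random.uniform(-1, 1, size=8).astype(np.float32)  # latent action chosen by the downstream policy
    obs, reward, done = env.step(z)
    print(obs.shape, reward, done)

In this sketch the downstream off-policy learner would only ever see (obs, z, reward) transitions, so exploration happens in the compact, pretrained latent space rather than over raw joint commands.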

@article{hu2025_2506.04147,
  title={SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL},
  author={Jiaheng Hu and Peter Stone and Roberto Martín-Martín},
  journal={arXiv preprint arXiv:2506.04147},
  year={2025}
}