Recent advances in reinforcement learning (RL) have led to significant progress in humanoid robot locomotion, simplifying the design and training of motion policies in simulation. However, the numerous implementation details make transferring these policies to real-world robots a challenging task. To address this, we have developed a comprehensive code framework that covers the entire process from training to deployment, incorporating common RL training methods, domain randomization, reward function design, and solutions for handling parallel structures. This library is made available as a community resource, with detailed descriptions of its design and experimental results. We validate the framework on the Booster T1 robot, demonstrating that the trained policies transfer seamlessly to the physical platform, enabling capabilities such as omnidirectional walking, disturbance resistance, and terrain adaptability. We hope this work provides a convenient tool for the robotics community, accelerating the development of humanoid robots. The code can be found at this https URL.
@article{wang2025_2506.15132,
  title={Booster Gym: An End-to-End Reinforcement Learning Framework for Humanoid Robot Locomotion},
  author={Yushi Wang and Penghui Chen and Xinyu Han and Feng Wu and Mingguo Zhao},
  journal={arXiv preprint arXiv:2506.15132},
  year={2025}
}
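The abstract mentions domain randomization as one of the framework's sim-to-real techniques. As a rough illustration of the general idea (not Booster Gym's actual API or parameter ranges, which are assumptions here), a minimal sketch might resample physical parameters at every episode reset so the policy cannot overfit to one exact simulation model:

```python
import random

# Hypothetical parameter ranges for illustration only; the real framework's
# randomized quantities and bounds may differ.
RANDOMIZATION_RANGES = {
    "friction":        (0.5, 1.5),   # scale on ground friction coefficient
    "base_mass_scale": (0.9, 1.1),   # multiplier on the torso mass
    "motor_strength":  (0.8, 1.2),   # multiplier on actuator torque limits
    "obs_noise_std":   (0.0, 0.02),  # std of Gaussian noise added to observations
}

def sample_randomization(rng: random.Random) -> dict:
    """Draw one set of physics parameters for a new training episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# Example: resample at every environment reset, then apply the sampled
# values to the simulator before rolling out the policy.
rng = random.Random(0)
params = sample_randomization(rng)
print(params["friction"])  # a value in [0.5, 1.5]
```

Exposing the ranges as a single configuration dictionary, as sketched here, makes it easy to widen or narrow the randomization when a policy fails to transfer.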