Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning

26 April 2023
Tuomas Haarnoja
Ben Moran
Guy Lever
Sandy H. Huang
Dhruva Tirumala
Jan Humplik
Markus Wulfmeier
Saran Tunyasuvunakool
Noah Y. Siegel
Roland Hafner
Michael Bloesch
Kristian Hartikainen
Arunkumar Byravan
Leonard Hasenclever
Yuval Tassa
Fereshteh Sadeghi
Nathan Batchelor
Federico Casarini
Stefano Saliceti
Charles Game
Neil Sreendra
Kushal Patel
Marlon Gwira
Andrea Huber
Nicole Hurley
Francesco Nori
Raia Hadsell
Nicolas Heess
Abstract

We investigate whether Deep Reinforcement Learning (Deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies in dynamic environments. We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game. The resulting agent exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking, and more; it transitions between them in a smooth, stable, and efficient manner. The agent's locomotion and tactical behavior adapt to specific game contexts in a way that would be impractical to manually design. The agent also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots. Our agent was trained in simulation and transferred to real robots zero-shot. We found that a combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training in simulation enabled good-quality transfer. Although the robots are inherently fragile, basic regularization of the behavior during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way, well beyond what is intuitively expected from the robot. Indeed, in experiments, they walked 181% faster, turned 302% faster, took 63% less time to get up, and kicked a ball 34% faster than a scripted baseline, while efficiently combining the skills to achieve the longer-term objectives.
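
The abstract attributes the zero-shot sim-to-real transfer to high-frequency control, targeted dynamics randomization, and random perturbations during training, with behavior regularization keeping the learned motions safe. The sketch below shows how those ingredients typically fit into a simulated training loop. It is a minimal illustration under assumed interfaces, not the paper's implementation: ToySim, apply_external_force, the parameter ranges, and the torque penalty weight are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

NUM_JOINTS = 20  # the paper's robot has 20 actuated joints


class ToySim:
    """Hypothetical stand-in for a physics simulator; not the paper's stack."""

    def __init__(self):
        self.nominal_joint_damping = 1.0
        self.nominal_torso_mass = 3.5  # kg, illustrative
        self.floor_friction = 0.7
        self.joint_damping = self.nominal_joint_damping
        self.torso_mass = self.nominal_torso_mass
        self.observation_delay = 0.0

    def reset(self):
        return np.zeros(3 * NUM_JOINTS)  # placeholder observation

    def apply_external_force(self, force_xyz):
        pass  # a real simulator would shove the robot's torso here

    def step(self, action):
        obs = np.zeros(3 * NUM_JOINTS)
        task_reward, torques, done = 0.0, action, False
        return obs, task_reward, torques, done


def randomize_dynamics(sim):
    """Resample a few physical parameters per episode (illustrative ranges)."""
    sim.floor_friction = rng.uniform(0.4, 1.0)
    sim.joint_damping = sim.nominal_joint_damping * rng.uniform(0.8, 1.2)
    sim.torso_mass = sim.nominal_torso_mass * rng.uniform(0.9, 1.1)
    sim.observation_delay = rng.uniform(0.0, 0.02)  # seconds of sensor latency


def shaped_reward(task_reward, torques, penalty_weight=1e-3):
    """Behavior regularization: penalize large torques so motions stay safe."""
    return task_reward - penalty_weight * float(np.sum(np.square(torques)))


def run_training_episode(sim, policy, max_steps=2000, push_prob=0.01):
    """One simulated episode with randomized dynamics and random pushes."""
    randomize_dynamics(sim)
    obs = sim.reset()
    for _ in range(max_steps):
        action = policy(obs)  # queried at a high control frequency
        if rng.random() < push_prob:
            # Occasional external shoves force the policy to learn recovery.
            sim.apply_external_force(rng.normal(0.0, 10.0, size=3))
        obs, task_reward, torques, done = sim.step(action)
        reward = shaped_reward(task_reward, torques)
        # ...the transition (obs, action, reward) would feed the RL update...
        if done:
            break


# Usage with a trivial placeholder policy:
run_training_episode(ToySim(), policy=lambda obs: np.zeros(NUM_JOINTS))
```

The abstract's word "targeted" suggests randomizing only the parameters that matter for transfer rather than everything at once; the ranges above are purely illustrative and would be tuned to the specific robot.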

View on arXiv: https://arxiv.org/abs/2304.13653