Toward Real-World Cooperative and Competitive Soccer with Quadrupedal Robot Teams

Achieving coordinated teamwork among legged robots requires both fine-grained locomotion control and long-horizon strategic decision-making. Robot soccer offers a compelling testbed for this challenge, combining dynamic, competitive, and multi-agent interactions. In this work, we present a hierarchical multi-agent reinforcement learning (MARL) framework that enables fully autonomous and decentralized quadruped robot soccer. First, a set of highly dynamic low-level skills for legged locomotion and ball manipulation, such as walking, dribbling, and kicking, is trained. On top of these, a high-level strategic planning policy is trained with Multi-Agent Proximal Policy Optimization (MAPPO) via Fictitious Self-Play (FSP). This learning framework allows agents to adapt to diverse opponent strategies and gives rise to sophisticated team behaviors, including coordinated passing, interception, and dynamic role allocation. An extensive ablation study shows that the proposed learning method offers significant advantages in the cooperative and competitive multi-agent soccer game. We deploy the learned policies on real quadruped robots relying solely on onboard proprioception and decentralized localization, with the resulting system supporting autonomous robot-robot and robot-human soccer matches on indoor and outdoor soccer courts.
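To illustrate the training scheme the abstract describes, the following is a minimal Python sketch of fictitious self-play at the high level: the strategy policy is repeatedly trained against opponents sampled from a pool of its own frozen past snapshots, while low-level skills are assumed to be fixed. All class and function names (HighLevelPolicy, play_match, fictitious_self_play), the skill labels, and the snapshot schedule are illustrative assumptions, not the paper's actual implementation or its MAPPO details.

import copy
import random

class HighLevelPolicy:
    """Placeholder for the MAPPO-trained strategy policy (hypothetical interface)."""
    def act(self, observation):
        # In the real system this would emit a command for the low-level skills
        # (e.g., a target velocity or a skill selection); here it is a stub.
        return random.choice(["walk", "dribble", "kick", "intercept"])

    def update(self, rollouts):
        # Stand-in for a MAPPO gradient update on the collected rollouts.
        pass

def play_match(learner, opponent, num_steps=100):
    """Stub match: collect joint decisions of both teams (environment omitted)."""
    return [(learner.act(None), opponent.act(None)) for _ in range(num_steps)]

def fictitious_self_play(num_iterations=10, snapshot_every=2):
    """Train the learner against a growing pool of frozen past policies (FSP)."""
    learner = HighLevelPolicy()
    opponent_pool = [copy.deepcopy(learner)]  # seed the pool with the initial policy

    for it in range(num_iterations):
        # Sample an opponent from the historical pool, so the learner must stay
        # strong against all earlier strategies rather than only the latest one.
        opponent = random.choice(opponent_pool)
        rollouts = play_match(learner, opponent)
        learner.update(rollouts)

        # Periodically freeze a snapshot of the current learner into the pool.
        if (it + 1) % snapshot_every == 0:
            opponent_pool.append(copy.deepcopy(learner))

    return learner, opponent_pool

if __name__ == "__main__":
    trained, pool = fictitious_self_play()
    print(f"Trained against a pool of {len(pool)} opponent snapshots.")

Sampling opponents from the full history, rather than always playing the latest policy, is what allows self-play to avoid cycling and to produce strategies that remain robust to diverse opponents; the same structure applies whether the opponent pool holds single agents or whole teams.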
@article{su2025_2505.13834,
  title={Toward Real-World Cooperative and Competitive Soccer with Quadrupedal Robot Teams},
  author={Zhi Su and Yuman Gao and Emily Lukas and Yunfei Li and Jiaze Cai and Faris Tulbah and Fei Gao and Chao Yu and Zhongyu Li and Yi Wu and Koushil Sreenath},
  journal={arXiv preprint arXiv:2505.13834},
  year={2025}
}