MoE-Loco: Mixture of Experts for Multitask Locomotion

11 March 2025
Runhan Huang, Shaoting Zhu, Yilun Du, Hang Zhao
Abstract

We present MoE-Loco, a Mixture of Experts (MoE) framework for multitask locomotion of legged robots. Our method enables a single policy to handle diverse terrains, including bars, pits, stairs, slopes, and baffles, while supporting both quadrupedal and bipedal gaits. Using MoE, we mitigate the gradient conflicts that typically arise in multitask reinforcement learning, improving both training efficiency and performance. Our experiments demonstrate that different experts naturally specialize in distinct locomotion behaviors, which can be leveraged for task migration and skill composition. We further validate our approach in both simulation and real-world deployment, showcasing its robustness and adaptability.
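The abstract credits the MoE structure with easing gradient conflicts across tasks. As a rough illustration of how such a policy head can be wired, below is a minimal soft-gated MoE layer in PyTorch. The expert count, network sizes, and gating scheme are illustrative assumptions, not details taken from the MoE-Loco paper.

```python
# Minimal sketch of a soft-gated Mixture-of-Experts policy head (PyTorch).
# Expert count, hidden sizes, and the softmax gate are assumptions for
# illustration; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class MoEPolicyLayer(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, num_experts: int = 4, hidden: int = 128):
        super().__init__()
        # Each expert is a small MLP mapping observations to actions.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(), nn.Linear(hidden, act_dim))
            for _ in range(num_experts)
        )
        # The gate produces mixture weights over experts from the same observation.
        self.gate = nn.Linear(obs_dim, num_experts)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(obs), dim=-1)               # (batch, experts)
        expert_out = torch.stack([e(obs) for e in self.experts], 1)   # (batch, experts, act_dim)
        # Blend expert actions with the gating weights; gradients for a given
        # task flow mainly into the experts the gate selects, which is the
        # mechanism the abstract associates with reduced gradient conflict.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)

# Example usage with made-up dimensions:
# policy = MoEPolicyLayer(obs_dim=48, act_dim=12)
# action = policy(torch.randn(8, 48))
```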

@article{huang2025_2503.08564,
  title={MoE-Loco: Mixture of Experts for Multitask Locomotion},
  author={Runhan Huang and Shaoting Zhu and Yilun Du and Hang Zhao},
  journal={arXiv preprint arXiv:2503.08564},
  year={2025}
}