ResearchTrend.AI

HIL: Hybrid Imitation Learning of Diverse Parkour Skills from Videos

19 May 2025
Jiashun Wang
Yifeng Jiang
Haotian Zhang
Chen Tessler
Davis Rempe
Jessica Hodgins
Xue Bin Peng
Abstract

Recent data-driven methods leveraging deep reinforcement learning have been an effective paradigm for developing controllers that enable physically simulated characters to produce natural, human-like behaviors. However, these data-driven methods often struggle to adapt to novel environments and to compose diverse skills coherently into more complex tasks. To address these challenges, we propose a hybrid imitation learning (HIL) framework that combines motion tracking for precise skill replication with adversarial imitation learning to enhance adaptability and skill composition. This hybrid learning framework is implemented through parallel multi-task environments and a unified observation space, featuring an agent-centric scene representation that facilitates effective learning from the hybrid parallel environments. Our framework trains a unified controller on parkour data sourced from Internet videos, enabling a simulated character to traverse new environments using diverse and life-like parkour skills. Evaluations across challenging parkour environments demonstrate that our method improves motion quality, increases skill diversity, and achieves competitive task completion compared to previous learning-based methods.
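The core idea of the hybrid objective can be illustrated with a small sketch: tracking environments reward precise replication of a reference pose, while adversarial environments reward transitions that a discriminator judges to be in the style of the reference data. The reward shapes, the `sigma` parameter, and the routing function below are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import math

def tracking_reward(pose_error, sigma=0.5):
    # Exponentiated negative pose error: a common motion-tracking reward
    # shape that peaks at 1.0 when the character matches the reference pose.
    return math.exp(-pose_error / sigma)

def adversarial_reward(d_score):
    # GAIL-style reward from a discriminator score in (0, 1); larger when
    # the discriminator believes the transition came from reference motion.
    eps = 1e-8
    return -math.log(max(1.0 - d_score, eps))

def hybrid_reward(env_type, pose_error=None, d_score=None):
    # Route each parallel environment to its reward: tracking environments
    # imitate reference clips precisely, adversarial environments only
    # reward matching the overall motion style, which aids composition.
    if env_type == "tracking":
        return tracking_reward(pose_error)
    if env_type == "adversarial":
        return adversarial_reward(d_score)
    raise ValueError(f"unknown environment type: {env_type}")
```

A unified policy trained across both environment types sees a single observation space, so the same controller benefits from precise skill replication and style-level adaptability at once.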

@article{wang2025_2505.12619,
  title={HIL: Hybrid Imitation Learning of Diverse Parkour Skills from Videos},
  author={Jiashun Wang and Yifeng Jiang and Haotian Zhang and Chen Tessler and Davis Rempe and Jessica Hodgins and Xue Bin Peng},
  journal={arXiv preprint arXiv:2505.12619},
  year={2025}
}