arXiv: 2506.12769
RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control

15 June 2025
Junpeng Yue
Zepeng Wang
Yuxuan Wang
Weishuai Zeng
Jiangxing Wang
Xinrun Xu
Yu Zhang
Sipeng Zheng
Ziluo Ding
Zongqing Lu
Main: 9 pages · 4 figures · 9 tables · Bibliography: 1 page · Appendix: 5 pages
Abstract

This paper focuses on a critical challenge in robotics: translating text-driven human motions into executable actions for humanoid robots, enabling efficient and cost-effective learning of new behaviors. While existing text-to-motion generation methods achieve semantic alignment between language and motion, they often produce kinematically or physically infeasible motions unsuitable for real-world deployment. To bridge this sim-to-real gap, we propose Reinforcement Learning from Physical Feedback (RLPF), a novel framework that integrates physics-aware motion evaluation with text-conditioned motion generation. RLPF employs a motion tracking policy to assess feasibility in a physics simulator, generating rewards for fine-tuning the motion generator. Furthermore, RLPF introduces an alignment verification module to preserve semantic fidelity to the text instruction. This joint optimization ensures both physical plausibility and instruction alignment. Extensive experiments show that RLPF greatly outperforms baseline methods in generating physically feasible motions while maintaining semantic correspondence with text instructions, enabling successful deployment on real humanoid robots.
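
The abstract describes a reward-driven fine-tuning loop: a text-conditioned motion generator is updated using a reward that combines a physical-feasibility signal from a motion-tracking policy run in simulation with a semantic-alignment signal from a verification module. The following is a minimal, hypothetical sketch of such a loop; the class names, reward stubs, weighting, and REINFORCE-style update are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical RLPF-style fine-tuning sketch (all interfaces assumed, not the paper's code).
# A toy text-conditioned generator is updated with a policy-gradient objective whose reward
# combines (a) a stubbed physical-feasibility score standing in for tracking a motion in a
# physics simulator and (b) a stubbed text-motion alignment score.

import torch
import torch.nn as nn

class ToyMotionGenerator(nn.Module):
    """Maps a text embedding to a Gaussian over a flattened motion sequence."""
    def __init__(self, text_dim=16, motion_dim=12, horizon=8):
        super().__init__()
        self.mean = nn.Linear(text_dim, horizon * motion_dim)
        self.log_std = nn.Parameter(torch.zeros(horizon * motion_dim))

    def forward(self, text_emb):
        mu = self.mean(text_emb)
        return torch.distributions.Normal(mu, self.log_std.exp())

def tracking_feasibility_reward(motion):
    # Stand-in for rolling out a tracking policy in a simulator and scoring how
    # well the generated motion can be physically tracked (here: a toy penalty
    # on large joint displacements; higher reward means more feasible).
    return -motion.abs().mean(dim=-1)

def alignment_reward(motion, text_emb):
    # Stand-in for the alignment-verification module that scores semantic
    # fidelity between the motion and the instruction (purely illustrative).
    return -((motion.mean(dim=-1) - text_emb.mean(dim=-1)) ** 2)

gen = ToyMotionGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(100):
    text_emb = torch.randn(32, 16)            # batch of placeholder text embeddings
    dist = gen(text_emb)
    motion = dist.sample()                     # sample candidate motions
    log_prob = dist.log_prob(motion).sum(-1)

    # Joint reward: physical feasibility + semantic alignment (weight is assumed).
    reward = tracking_feasibility_reward(motion) + 0.5 * alignment_reward(motion, text_emb)
    advantage = reward - reward.mean()         # simple mean baseline

    loss = -(advantage.detach() * log_prob).mean()   # REINFORCE-style update
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the two reward terms play the roles the abstract assigns to the tracking policy and the alignment verification module; in practice the feasibility term would come from simulator rollouts and the alignment term from a learned text-motion scoring model.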
