ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations

16 May 2025
Jiahui Zhang, Yusen Luo, Abrar Anwar, Sumedh Anand Sontakke, Joseph J. Lim, Jesse Thomason, Erdem Biyik, Jesse Zhang
Topics: OffRL, LM&Ro
Abstract

We introduce ReWiND, a framework for learning robot manipulation tasks solely from language instructions without per-task demonstrations. Standard reinforcement learning (RL) and imitation learning methods require expert supervision through human-designed reward functions or demonstrations for every new task. In contrast, ReWiND starts from a small demonstration dataset to learn: (1) a data-efficient, language-conditioned reward function that labels the dataset with rewards, and (2) a language-conditioned policy pre-trained with offline RL using these rewards. Given an unseen task variation, ReWiND fine-tunes the pre-trained policy using the learned reward function, requiring minimal online interaction. We show that ReWiND's reward model generalizes effectively to unseen tasks, outperforming baselines by up to 2.4x in reward generalization and policy alignment metrics. Finally, we demonstrate that ReWiND enables sample-efficient adaptation to new tasks, beating baselines by 2x in simulation and improving real-world pretrained bimanual policies by 5x, taking a step towards scalable, real-world robot learning. See website at this https URL.
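To make the three-stage recipe in the abstract concrete, here is a toy numpy sketch of that pipeline. Everything in it is an illustrative assumption rather than the authors' implementation: hash-based text embeddings stand in for a language encoder, linear models stand in for the reward and policy networks, reward-weighted regression stands in for offline RL, and the dynamics are made up.

import hashlib
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM = ACT_DIM = 4   # toy sizes; real observations and actions are richer
LANG_DIM = 4

def embed_text(instruction):
    # Deterministic stand-in for a pretrained language encoder.
    seed = int.from_bytes(hashlib.md5(instruction.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).normal(size=LANG_DIM)

class RewardModel:
    # Language-conditioned reward r(observation, instruction) -> scalar.
    def __init__(self):
        self.w = np.zeros(OBS_DIM + LANG_DIM)

    def fit(self, demos):
        # Stand-in for reward learning on the small demo dataset: regress
        # toward task progress (later timesteps get higher targets).
        X = np.stack([np.concatenate([o, embed_text(l)]) for o, l, _ in demos])
        y = np.array([p for _, _, p in demos])
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def __call__(self, obs, instruction):
        return float(self.w @ np.concatenate([obs, embed_text(instruction)]))

class Policy:
    # Language-conditioned policy pi(observation, instruction) -> action.
    def __init__(self):
        self.W = 0.1 * rng.normal(size=(ACT_DIM, OBS_DIM + LANG_DIM))

    def act(self, obs, instruction, noise=0.0):
        x = np.concatenate([obs, embed_text(instruction)])
        return self.W @ x + noise * rng.normal(size=ACT_DIM)

    def weighted_update(self, transitions, rewards, lr=0.05):
        # Crude reward-weighted regression as a stand-in for offline RL.
        weights = np.exp(rewards - rewards.max())
        for (obs, instr, act), w in zip(transitions, weights):
            x = np.concatenate([obs, embed_text(instr)])
            self.W += lr * w * np.outer(act - self.W @ x, x)

# (1) Label a small demo dataset with progress and fit the reward model.
demos = [(rng.normal(size=OBS_DIM), "pick up the red block", t / 9)
         for t in range(10)]
reward_model = RewardModel()
reward_model.fit(demos)

# (2) Pretrain the policy with offline RL on reward-model-labeled data.
policy = Policy()
offline = [(o, l, rng.normal(size=ACT_DIM)) for o, l, _ in demos]
rewards = np.array([reward_model(o + 0.1 * a, l) for o, l, a in offline])
policy.weighted_update(offline, rewards)

# (3) Adapt to an unseen task variation using only the learned reward and
#     a little online interaction (toy dynamics: next_obs = obs + 0.1 * action).
task = "pick up the blue block"
for _ in range(20):
    obs = rng.normal(size=OBS_DIM)
    acts = [policy.act(obs, task, noise=0.3) for _ in range(8)]
    rewards = np.array([reward_model(obs + 0.1 * a, task) for a in acts])
    policy.weighted_update([(obs, task, a) for a in acts], rewards)

In the paper itself, each of these stand-ins corresponds to a learned component trained on real robot data; the sketch only mirrors the stated structure, where no new demonstrations are collected at adaptation time because the reward model, not a human, supervises fine-tuning.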

@article{zhang2025_2505.10911,
  title={ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations},
  author={Jiahui Zhang and Yusen Luo and Abrar Anwar and Sumedh Anand Sontakke and Joseph J. Lim and Jesse Thomason and Erdem Biyik and Jesse Zhang},
  journal={arXiv preprint arXiv:2505.10911},
  year={2025}
}