TREND: Tri-teaching for Robust Preference-based Reinforcement Learning with Demonstrations

Abstract

Preference feedback collected from human or VLM annotators is often noisy, posing a significant challenge for preference-based reinforcement learning, which relies on accurate preference labels. To address this challenge, we propose TREND, a novel framework that integrates few-shot expert demonstrations with a tri-teaching strategy for effective noise mitigation. Our method trains three reward models simultaneously; each model treats its small-loss preference pairs as useful knowledge and teaches these pairs to its peer network to update the peer's parameters. Remarkably, our approach requires as few as one to three expert demonstrations to achieve high performance. We evaluate TREND on various robotic manipulation tasks, achieving success rates of up to 90% even with noise levels as high as 40%, highlighting its robustness to noisy preference feedback. Project page: this https URL.
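
To make the tri-teaching loop concrete, below is a minimal PyTorch sketch of one update step in the spirit of the abstract: each of three reward models ranks the batch by its own Bradley-Terry preference loss, and its small-loss pairs are used to update a peer. The cyclic peer assignment, the `keep_ratio` hyperparameter, and the segment-scoring convention are assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def tri_teaching_step(reward_models, optimizers, seg_a, seg_b, prefs, keep_ratio=0.6):
    """One tri-teaching update on a batch of (possibly noisy) preference pairs.

    reward_models: list of 3 nets mapping a segment (B, T, obs_dim) to per-step
        rewards (B, T); summed over time to score a segment (an assumed convention).
    seg_a, seg_b: batched trajectory segments, shape (B, T, obs_dim).
    prefs: labels in {0, 1}, 1 meaning segment b is preferred; shape (B,).
    keep_ratio: fraction of pairs treated as clean (hypothetical hyperparameter).
    """
    n_keep = max(1, int(keep_ratio * prefs.shape[0]))

    # Each model computes a per-pair Bradley-Terry loss, without gradients,
    # purely to rank pairs by how "clean" they look to that model.
    per_pair_losses = []
    for model in reward_models:
        with torch.no_grad():
            logits = model(seg_b).sum(-1) - model(seg_a).sum(-1)  # logit of P(b > a)
            loss = F.binary_cross_entropy_with_logits(
                logits, prefs.float(), reduction="none")
        per_pair_losses.append(loss)

    # Each model is trained on the small-loss pairs nominated by a peer
    # (cyclic assignment i <- i+1 is an assumption; the paper may pair differently).
    for i, (model, opt) in enumerate(zip(reward_models, optimizers)):
        teacher_loss = per_pair_losses[(i + 1) % 3]
        idx = torch.argsort(teacher_loss)[:n_keep]  # small-loss selection
        logits = model(seg_b[idx]).sum(-1) - model(seg_a[idx]).sum(-1)
        loss = F.binary_cross_entropy_with_logits(logits, prefs[idx].float())
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The small-loss criterion is the standard co-teaching heuristic: under label noise, networks fit clean pairs before noisy ones, so low-loss pairs are more likely to carry correct preferences, and exchanging them between peers keeps any one model's errors from reinforcing themselves.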

@article{huang2025_2505.06079,
  title={TREND: Tri-teaching for Robust Preference-based Reinforcement Learning with Demonstrations},
  author={Shuaiyi Huang and Mara Levy and Anubhav Gupta and Daniel Ekpo and Ruijie Zheng and Abhinav Shrivastava},
  journal={arXiv preprint arXiv:2505.06079},
  year={2025}
}