DPR: Diffusion Preference-based Reward for Offline Reinforcement Learning

3 March 2025
Teng Pang
Bingzheng Wang
Guoqiang Wu
Yilong Yin
OffRL
Abstract

Offline preference-based reinforcement learning (PbRL) removes the need to hand-design a reward function, aligning with human preferences via preference-driven reward feedback without interacting with the environment. However, the effectiveness of preference-driven reward functions depends on the modeling capacity of the learner, which current MLP-based and Transformer-based methods may not adequately provide. To alleviate reward-function failures caused by insufficient modeling, we propose a novel preference-based reward acquisition method: Diffusion Preference-based Reward (DPR). Unlike previous methods that use the Bradley-Terry model over trajectory preferences, we use diffusion models to directly model preference distributions over state-action pairs, allowing rewards to be obtained discriminatively from these distributions. In addition, because preference data reveal only the relative relationship within each pair of trajectories, we further propose Conditional Diffusion Preference-based Reward (C-DPR), which leverages this relative preference information to improve the construction of the diffusion model. We apply both methods to existing offline reinforcement learning algorithms, and a series of experimental results demonstrates that diffusion-based reward acquisition outperforms previous MLP-based and Transformer-based methods.
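
For context, prior offline PbRL reward learners typically fit a reward network with the Bradley-Terry model over paired trajectories; the sketch below restates that standard baseline, followed by a purely illustrative expression (not taken from the paper) of how a diffusion-modeled preference distribution over state-action pairs could yield a reward discriminatively, as described in the abstract. The symbols r_psi, p_theta, and the label c are notation introduced here for illustration only.

% Bradley-Terry baseline used by prior PbRL methods: a reward network
% r_psi is trained so that the preferred trajectory sigma^1 receives a
% higher predicted return than sigma^0, with y the preference label.
\[
P\big(\sigma^{1} \succ \sigma^{0}\big)
  = \frac{\exp\big(\sum_{t} r_{\psi}(s^{1}_{t}, a^{1}_{t})\big)}
         {\exp\big(\sum_{t} r_{\psi}(s^{0}_{t}, a^{0}_{t})\big)
        + \exp\big(\sum_{t} r_{\psi}(s^{1}_{t}, a^{1}_{t})\big)},
\qquad
\mathcal{L}(\psi) = -\,\mathbb{E}\Big[\, y \log P\big(\sigma^{1} \succ \sigma^{0}\big)
  + (1-y)\log P\big(\sigma^{0} \succ \sigma^{1}\big) \Big].
\]
% Illustrative only (not the paper's exact formulation): with a diffusion
% model p_theta over preference labels c conditioned on (s, a), a reward
% could be read off discriminatively, e.g.
\[
r(s, a) \;\propto\; p_{\theta}\big(c = \text{preferred} \mid s, a\big).
\]

The abstract's claim is that replacing the MLP- or Transformer-based reward head in the standard pipeline above with such a diffusion-based preference model is what drives the reported gains.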

View on arXiv
@article{pang2025_2503.01143,
  title={DPR: Diffusion Preference-based Reward for Offline Reinforcement Learning},
  author={Teng Pang and Bingzheng Wang and Guoqiang Wu and Yilong Yin},
  journal={arXiv preprint arXiv:2503.01143},
  year={2025}
}