Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective

Abstract

Sample efficiency is critical for online Reinforcement Learning from Human Feedback (RLHF). While existing works investigate sample-efficient online exploration strategies, the potential of utilizing misspecified yet relevant reward models to accelerate learning remains underexplored. This paper studies how to transfer knowledge from those imperfect reward models in online RLHF. We start by identifying a novel property of the KL-regularized RLHF objective: \emph{a policy's coverability of the optimal policy is captured by its sub-optimality}. Building on this insight, we propose novel transfer learning principles and a theoretical algorithm with provable benefits compared to standard online learning. Our approach achieves low regret in the early stage by quickly adapting to the best available source reward models without prior knowledge of their quality, and over time, it attains an $\tilde{O}(\sqrt{T})$ regret bound \emph{independent} of structural complexity measures. Empirically, inspired by our theoretical findings, we develop a win-rate-based transfer policy selection method with improved computational efficiency. Moreover, our empirical transfer learning technique is modular and can be integrated with various policy optimization methods, such as DPO, IPO and XPO, to further enhance their performance. We validate the effectiveness of our method through experiments on summarization tasks.
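
For reference, the KL-regularized RLHF objective mentioned above is standardly written as follows; the closed-form optimal policy it induces is what ties a policy's sub-optimality to its coverage of $\pi^*$. The notation here ($r^*$ for the ground-truth reward, $\beta$ for the regularization strength, $\pi_{\mathrm{ref}}$ for the reference policy) is generic and chosen for illustration, not taken from the paper.

\[
\max_{\pi}\;\; \mathbb{E}_{x\sim d,\; y\sim \pi(\cdot\mid x)}\big[r^*(x,y)\big] \;-\; \beta\,\mathrm{KL}\big(\pi(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big),
\qquad
\pi^*(y\mid x) \;\propto\; \pi_{\mathrm{ref}}(y\mid x)\,\exp\big(r^*(x,y)/\beta\big).
\]

Since $\pi^*$ is an exponential tilting of $\pi_{\mathrm{ref}}$ by the reward, the density ratio $\pi^*(y\mid x)/\pi(y\mid x)$ of any policy $\pi$ can be related to its KL-regularized sub-optimality, which is, informally, the coverage property highlighted in the abstract.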

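The abstract does not detail the win-rate-based transfer policy selection procedure, so the Python sketch below is only one plausible reading, under the assumption that each candidate source policy (e.g., one tuned against each imperfect reward model) is compared against the current policy on a batch of prompts via pairwise preference judgments, and the candidate with the highest empirical win rate is selected. All names (estimate_win_rate, judge_prefers, the generate callables) are hypothetical, not from the paper.

from typing import Callable, List, Sequence

def estimate_win_rate(
    candidate_generate: Callable[[str], str],
    current_generate: Callable[[str], str],
    judge_prefers: Callable[[str, str, str], bool],
    prompts: Sequence[str],
) -> float:
    # Fraction of prompts where the judge prefers the candidate's response (a)
    # over the current policy's response (b). judge_prefers(prompt, a, b) is a
    # hypothetical pairwise preference oracle (e.g., a reward model or LLM judge).
    wins = 0
    for prompt in prompts:
        a = candidate_generate(prompt)
        b = current_generate(prompt)
        if judge_prefers(prompt, a, b):
            wins += 1
    return wins / max(len(prompts), 1)

def select_transfer_policy(
    candidates: List[Callable[[str], str]],
    current_generate: Callable[[str], str],
    judge_prefers: Callable[[str, str, str], bool],
    prompts: Sequence[str],
    min_win_rate: float = 0.5,
) -> Callable[[str], str]:
    # Pick the candidate with the highest empirical win rate against the current
    # policy; fall back to the current policy if no candidate beats min_win_rate,
    # i.e., no source policy appears worth transferring from.
    best, best_rate = None, min_win_rate
    for candidate in candidates:
        rate = estimate_win_rate(candidate, current_generate, judge_prefers, prompts)
        if rate > best_rate:
            best, best_rate = candidate, rate
    return best if best is not None else current_generate

Because such a selector only needs pairwise comparisons, it can sit on top of any policy optimization loop (e.g., DPO, IPO, or XPO) without modifying the training objective, consistent with the modularity claimed in the abstract.
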
@article{huang2025_2502.19255,
  title={Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective},
  author={Jiawei Huang and Bingcong Li and Christoph Dann and Niao He},
  journal={arXiv preprint arXiv:2502.19255},
  year={2025}
}