
Policy Filtration for RLHF to Mitigate Noise in Reward Models

11 September 2024
Chuheng Zhang
Wei Shen
Li Zhao
Xuyun Zhang
Xiaolong Xu
Wanchun Dou
Jiang Bian
Main: 8 pages · Appendix: 12 pages · Bibliography: 4 pages · 7 figures · 6 tables
Abstract

While direct policy optimization methods exist, pioneering LLMs are fine-tuned with reinforcement learning from human feedback (RLHF) to generate better responses under the supervision of a reward model learned from preference data. One major challenge of RLHF is the inaccuracy of the intermediate reward model, especially in tasks that require complex reasoning for the reward model to score a response. We find that the reliability of the reward model varies across responses assigned different rewards. This motivates us to filter out the samples whose rewards may be unreliable, improving the signal-to-noise ratio during policy learning and resulting in Policy Filtration for Proximal Policy Optimization (PF-PPO). To choose a proper policy filtering strategy, we use the coefficient of determination (R2) between the rewards and actual scores on the filtered samples as the metric for identifying promising strategies, since it measures how well the rewards retained by PF-PPO indicate real performance. We provide extensive experiments to validate the effectiveness of PF-PPO on code generation and math reasoning tasks. In code generation, PF-PPO achieves state-of-the-art performance among 7-billion-parameter models on HumanEval (+7.9%), MBPP (+0.7%), and LeetCode Contest (+10.0%), a more challenging benchmark that we created. In math reasoning, PF-PPO yields performance increases across different reward models and benchmarks (Ape210K and CMATH). Code is available on this https URL.
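The filtering-and-evaluation idea can be sketched in a few lines of Python. The snippet below is illustrative, not the authors' implementation: the function names, the keep_ratio parameter, the top-reward strategy, and the use of squared Pearson correlation as R2 are assumptions. It shows one way to retain the highest-reward samples and then check how well the retained reward-model scores track ground-truth scores.

import numpy as np

def filter_by_reward(rewards, keep_ratio=0.5):
    # Keep the indices of the samples with the highest reward-model scores.
    # Other strategies (e.g., keeping both the best and worst responses)
    # can be swapped in here.
    rewards = np.asarray(rewards)
    k = max(1, int(len(rewards) * keep_ratio))
    return np.argsort(rewards)[::-1][:k]

def r_squared_on_filtered(rewards, true_scores, idx):
    # Squared Pearson correlation between reward-model scores and actual
    # task scores on the retained samples; for a single predictor this
    # equals the R^2 of a simple linear fit.
    r = np.asarray(rewards)[idx]
    y = np.asarray(true_scores)[idx]
    return float(np.corrcoef(r, y)[0, 1] ** 2)

# Example: compare filtering strategies by the R^2 they yield.
rewards = np.random.randn(512)                      # reward-model scores
true_scores = rewards + 0.5 * np.random.randn(512)  # e.g., unit-test pass rates
for ratio in (1.0, 0.5):
    idx = filter_by_reward(rewards, keep_ratio=ratio)
    print(ratio, r_squared_on_filtered(rewards, true_scores, idx))

A strategy whose retained rewards give a higher R2 against actual scores would, under this criterion, be preferred for policy learning.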

@article{zhang2025_2409.06957,
  title={Policy Filtration for RLHF to Mitigate Noise in Reward Models},
  author={Chuheng Zhang and Wei Shen and Li Zhao and Xuyun Zhang and Xiaolong Xu and Wanchun Dou and Jiang Bian},
  journal={arXiv preprint arXiv:2409.06957},
  year={2025}
}