
GFRIEND: Generative Few-shot Reward Inference through EfficieNt DPO

Main: 7 pages, 2 figures, 5 tables; Bibliography: 3 pages; Appendix: 5 pages
Abstract

The ability to train high-performing reward models with few-shot data is critical for enhancing the efficiency and scalability of Reinforcement Learning from Human Feedback (RLHF). We propose a data augmentation and expansion framework that enables generative reward models trained on small datasets to achieve performance comparable to those trained on large-scale datasets. Traditional methods for training generative reward models, such as Direct Preference Optimization (DPO), are constrained by inefficiencies in sample pairing and limited data diversity. This work introduces preference refinement, which employs Chain-of-Thought (CoT) sampling to uncover diverse and high-quality preference relationships. It also incorporates a perplexity-based scoring mechanism to assign nuanced preference levels and utilizes Multi-level Direct Preference Optimization (M-DPO) to enable the model to capture finer-grained preference differences between samples. Experimental results demonstrate that the proposed method significantly enhances data efficiency and model performance, enabling reward models trained in a few-shot setting to match those trained on large-scale datasets. This study underscores the potential of data-efficient strategies in advancing reward model optimization, offering a robust solution for low-resource RLHF applications.
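To make the abstract's two key ingredients concrete, the sketch below illustrates one plausible reading of them: a perplexity-based score that maps the gap between a chosen and a rejected CoT-sampled response onto a discrete preference level, and a DPO-style loss whose margin grows with that level. The function names (`perplexity`, `preference_level`, `m_dpo_loss`), the bucketing rule, and the margin term `gamma * level` are illustrative assumptions, not details confirmed by the paper.

```python
# Hedged sketch only: illustrates perplexity-based preference levels and a
# level-aware (multi-level) DPO-style loss. All names and constants here are
# assumptions for illustration, not the authors' implementation.
import math
import torch
import torch.nn.functional as F


def perplexity(model, tokenizer, prompt: str, response: str) -> float:
    """Perplexity of `response` conditioned on `prompt` under `model`."""
    full = tokenizer(prompt + response, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
    with torch.no_grad():
        logits = model(**full).logits
    # Score only the response tokens (shift by one for next-token prediction).
    labels = full["input_ids"][:, prompt_len:]
    preds = logits[:, prompt_len - 1:-1, :]
    nll = F.cross_entropy(preds.reshape(-1, preds.size(-1)), labels.reshape(-1))
    return math.exp(nll.item())


def preference_level(ppl_chosen: float, ppl_rejected: float, num_levels: int = 4) -> int:
    """Map the perplexity gap of a CoT-sampled pair to a discrete level (assumed bucketing)."""
    gap = math.log(ppl_rejected) - math.log(ppl_chosen)  # larger gap => clearer preference
    return max(1, min(num_levels, 1 + int(gap)))


def m_dpo_loss(pi_logp_c, pi_logp_r, ref_logp_c, ref_logp_r,
               level: int, beta: float = 0.1, gamma: float = 0.05) -> torch.Tensor:
    """DPO loss with a level-dependent margin: higher levels demand a wider policy gap."""
    logits = beta * ((pi_logp_c - ref_logp_c) - (pi_logp_r - ref_logp_r))
    return -F.logsigmoid(logits - gamma * level).mean()
```

In this reading, pairs with clearer quality gaps (higher levels) impose a stronger constraint on the policy, which is one way a model could "capture finer-grained preference differences between samples" as the abstract describes.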

@article{zhao2025_2506.08965,
  title={GFRIEND: Generative Few-shot Reward Inference through EfficieNt DPO},
  author={Yiyang Zhao and Huiyu Bai and Xuejiao Zhao},
  journal={arXiv preprint arXiv:2506.08965},
  year={2025}
}