Simplify RLHF as Reward-Weighted SFT: A Variational Method

Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning Large Language Models (LLMs) with human values. However, RLHF has been continuously challenged by its high implementation complexity and computational cost. Even with recent simplifications, such as Direct Preference Optimization (DPO) and Advantage Leftover Lunch (A-LoL), the problems of over-fitting and training instability remain, hindering the alignment process from reaching the expected optimal performance. To address these challenges, we propose a novel simplification of RLHF from the perspective of variational inference, called Variational Alignment with Re-weighting (VAR). More specifically, by directly minimizing the distribution gap between the learning LLM policy and the optimal solution of RLHF, we transform the alignment objective into a reward-driven re-weighted supervised fine-tuning (SFT) form, which requires only a minor adjustment to the SFT loss to obtain noticeable improvements in training stability and effectiveness. On comprehensive alignment and generation benchmarks, our VAR method achieves competitive performance in LLM alignment helpfulness and harmlessness.
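The paper's exact objective and weighting scheme are given in the full text; as a rough illustration of the reward-weighted SFT idea, the sketch below is a minimal, hypothetical PyTorch loss that scales each response's negative log-likelihood by a reward-derived weight (here, an assumed softmax of r/beta over the batch, echoing the exp(r/beta) form of the closed-form KL-regularized RLHF optimum). The function name, weighting choice, and beta parameter are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def reward_weighted_sft_loss(logits, labels, rewards, beta=1.0, ignore_index=-100):
    """Illustrative reward-weighted SFT loss (assumed form, not the paper's exact objective).

    logits:  (batch, seq_len, vocab) model outputs over response tokens
    labels:  (batch, seq_len) target token ids; prompt/padding positions set to ignore_index
    rewards: (batch,) scalar reward for each (prompt, response) pair
    beta:    temperature controlling how sharply weights follow the rewards
    """
    # Token-level negative log-likelihood, ignoring masked positions.
    nll = F.cross_entropy(
        logits.transpose(1, 2), labels,
        ignore_index=ignore_index, reduction="none",
    )                                                  # (batch, seq_len)
    mask = (labels != ignore_index).float()
    seq_nll = (nll * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)  # (batch,)

    # Reward-derived weights, normalized over the batch (softmax of r / beta).
    weights = torch.softmax(rewards / beta, dim=0).detach()          # (batch,)

    # Weighted SFT objective: higher-reward responses contribute more to the gradient.
    return (weights * seq_nll).sum()
```

Compared with standard SFT, the only change in this sketch is the per-sequence weight; setting all weights equal recovers the usual cross-entropy objective.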
@article{du2025_2502.11026,
  title   = {Simplify RLHF as Reward-Weighted SFT: A Variational Method},
  author  = {Yuhao Du and Zhuo Li and Pengyu Cheng and Zhihong Chen and Yuejiao Xie and Xiang Wan and Anningzhe Gao},
  journal = {arXiv preprint arXiv:2502.11026},
  year    = {2025}
}