Mitigating Reward Over-optimization in Direct Alignment Algorithms with Importance Sampling

Abstract

Direct Alignment Algorithms (DAAs) such as Direct Preference Optimization (DPO) have emerged as alternatives to standard Reinforcement Learning from Human Feedback (RLHF) for aligning large language models (LLMs) with human values. However, these methods are more susceptible to over-optimization, in which the model drifts away from the reference policy, leading to degraded performance as training progresses. This paper proposes a novel importance-sampling approach, called IS-DAAs, to mitigate the over-optimization problem of offline DAAs. IS-DAAs multiply the DAA objective by an importance ratio that accounts for the reference policy distribution, and avoid the high variance associated with importance sampling by clipping the importance ratio to a maximum value. Our extensive experiments demonstrate that IS-DAAs can effectively mitigate over-optimization, especially under low regularization strength, and achieve better performance than other methods designed to address this problem. Our implementation is provided publicly at this link.
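
The abstract only outlines the mechanism, so below is a minimal, hypothetical PyTorch sketch of a clipped importance-sampling weight applied to a DPO-style loss. The function name is_dpo_loss, the behavior-policy log-probabilities behavior_logp_w / behavior_logp_l, the exact form of the importance ratio, and the clip_max value are illustrative assumptions, not the paper's precise formulation.

import torch
import torch.nn.functional as F

def is_dpo_loss(policy_logp_w, policy_logp_l,
                ref_logp_w, ref_logp_l,
                behavior_logp_w, behavior_logp_l,
                beta=0.1, clip_max=10.0):
    """Sketch of an importance-sampled, clipped DPO-style loss.

    policy_logp_*   : summed log-probs of chosen (_w) / rejected (_l) responses
                      under the current policy
    ref_logp_*      : summed log-probs under the frozen reference policy
    behavior_logp_* : summed log-probs under the policy assumed to have generated
                      the offline preference data (hypothetical input)
    """
    # Standard DPO logits: beta times the difference of policy/reference log-ratios.
    logits = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    per_example_loss = -F.logsigmoid(logits)

    # Importance ratio re-weighting offline samples toward the reference
    # distribution, computed per preference pair and detached from the graph.
    log_ratio = (ref_logp_w + ref_logp_l) - (behavior_logp_w + behavior_logp_l)
    weights = torch.exp(log_ratio).detach()

    # Clip the importance ratio to a maximum value to control variance.
    weights = torch.clamp(weights, max=clip_max)

    return (weights * per_example_loss).mean()

A caller would pass summed token-level log-probabilities for each chosen and rejected response; detaching and clipping the weights keeps the importance correction from inflating gradient variance, which is the trade-off the abstract points to.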

@article{nguyen2025_2506.08681,
  title={Mitigating Reward Over-optimization in Direct Alignment Algorithms with Importance Sampling},
  author={Phuc Minh Nguyen and Ngoc-Hieu Nguyen and Duy H. M. Nguyen and Anji Liu and An Mai and Binh T. Nguyen and Daniel Sonntag and Khoa D. Doan},
  journal={arXiv preprint arXiv:2506.08681},
  year={2025}
}
Length: 11 pages (main), 3 pages (bibliography), 3 pages (appendix); 7 figures, 2 tables