SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety

As Large Language Models (LLMs) continue to advance and find applications across a growing number of fields, ensuring the safety of LLMs has become increasingly critical. To address safety concerns, recent studies have proposed integrating safety constraints into Reinforcement Learning from Human Feedback (RLHF). However, these approaches tend to be complex, as they combine the already complicated procedures of RLHF with the additional steps required by the safety constraints. Inspired by Direct Preference Optimization (DPO), we introduce a new algorithm called SafeDPO, which is designed to directly optimize the safety alignment objective in a single stage of policy learning, without requiring any relaxation. SafeDPO introduces only one additional hyperparameter to further enhance safety and requires only minor modifications to standard DPO. As a result, it eliminates the need to fit separate reward and cost models or to sample from the language model during fine-tuning, while still enhancing the safety of LLMs. Finally, we demonstrate that SafeDPO achieves competitive performance compared to state-of-the-art safety alignment algorithms, both in terms of aligning with human preferences and improving safety.
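To make the described setup concrete, the sketch below shows how a DPO-style preference loss might be augmented with a safety-dependent term controlled by a single extra hyperparameter, using only policy and reference log-probabilities plus a binary safety label (no separate reward or cost model, no sampling during fine-tuning). This is a minimal illustration, not the SafeDPO objective itself: the exact loss is defined in the paper, and the names `delta` and `chosen_is_unsafe` are assumptions introduced here for exposition.

```python
# Illustrative sketch only: a DPO-style loss whose implicit-reward margin is
# shifted by a safety-dependent term. The precise SafeDPO objective is given
# in the paper; `delta` and `chosen_is_unsafe` are hypothetical names.
import torch
import torch.nn.functional as F


def dpo_style_safety_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), shape (batch,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (batch,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (batch,)
    chosen_is_unsafe: torch.Tensor,       # 1.0 if the preferred response is unsafe, else 0.0
    beta: float = 0.1,                    # standard DPO temperature
    delta: float = 1.0,                   # assumed extra safety hyperparameter
) -> torch.Tensor:
    # Standard DPO implicit rewards from policy/reference log-ratios.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margin = chosen_rewards - rejected_rewards

    # Assumed safety adjustment: penalize the margin when the preferred
    # response is labeled unsafe, pushing probability mass away from it.
    margin = margin - delta * chosen_is_unsafe

    # Same Bradley-Terry-style negative log-sigmoid loss as standard DPO.
    return -F.logsigmoid(margin).mean()
```

In this sketch, the training data would be the usual preference pairs annotated with safety labels, and setting `delta = 0` recovers the standard DPO loss, which reflects the abstract's claim that only minor modifications to DPO and one additional hyperparameter are needed.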
@article{kim2025_2505.20065,
  title   = {SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety},
  author  = {Geon-Hyeong Kim and Youngsoo Jang and Yu Jin Kim and Byoungjip Kim and Honglak Lee and Kyunghoon Bae and Moontae Lee},
  journal = {arXiv preprint arXiv:2505.20065},
  year    = {2025}
}