On Symmetric Losses for Robust Policy Optimization with Noisy Preferences

Abstract

Optimizing policies based on human preferences is key to aligning language models with human intent. This work focuses on reward modeling, a core component of reinforcement learning from human feedback (RLHF), and on offline preference optimization methods such as direct preference optimization (DPO). Conventional approaches typically assume accurate annotations. However, real-world preference data often contains noise due to human errors or biases. We propose a principled framework for robust policy optimization under noisy preferences, viewing reward modeling as a classification problem. This allows us to leverage symmetric losses, known for their robustness to label noise in classification, leading to our Symmetric Preference Optimization (SymPO) method. We prove that symmetric losses enable successful policy optimization even under noisy labels, as the resulting reward remains rank-preserving, a property sufficient for policy improvement. Experiments on synthetic and real-world tasks demonstrate the effectiveness of SymPO.

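To make the classification view concrete, the sketch below contrasts the standard logistic preference loss (as in Bradley-Terry reward modeling and DPO) with a classical symmetric loss, the sigmoid loss, which satisfies l(z) + l(-z) = 1 and is known to be robust to symmetric label noise. This is a minimal illustration in PyTorch, not the authors' reference implementation; the function names and the use of a reward-margin input are illustrative assumptions.

import torch
import torch.nn.functional as F

def logistic_preference_loss(reward_margin: torch.Tensor) -> torch.Tensor:
    # Standard (non-symmetric) logistic loss: -log sigmoid(margin).
    # Used in Bradley-Terry reward modeling and DPO; not noise-robust.
    return -F.logsigmoid(reward_margin)

def sigmoid_preference_loss(reward_margin: torch.Tensor) -> torch.Tensor:
    # Sigmoid loss, a symmetric loss: l(z) + l(-z) = 1 for all z.
    # Symmetric losses of this form are robust to symmetric label noise
    # in binary classification, the property SymPO builds on.
    return torch.sigmoid(-reward_margin)

if __name__ == "__main__":
    # `margin` stands for r(x, y_preferred) - r(x, y_dispreferred),
    # e.g. a scaled log-probability-ratio difference in a DPO-style setup.
    margin = torch.randn(8, requires_grad=True)
    loss = sigmoid_preference_loss(margin).mean()
    loss.backward()  # gradients flow back to the reward/policy parameters
    print(float(loss))

Under noisy preference labels, the symmetric loss keeps the learned reward rank-preserving (per the paper's analysis), which is why swapping it in for the logistic loss is the key change rather than any modification to the policy-optimization step.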
@article{nishimori2025_2505.24709,
  title={On Symmetric Losses for Robust Policy Optimization with Noisy Preferences},
  author={Soichiro Nishimori and Yu-Jie Zhang and Thanawat Lodkaew and Masashi Sugiyama},
  journal={arXiv preprint arXiv:2505.24709},
  year={2025}
}