
Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization

Yuxin Jiang
Bo Huang
Yufei Wang
Xingshan Zeng
Liangyou Li
Yasheng Wang
Xin Jiang
Lifeng Shang
Ruiming Tang
Wei Wang
Abstract

Direct preference optimization (DPO), a widely adopted offline preference optimization algorithm, aims to align large language models (LLMs) with human-desired behaviors using pairwise preference data. However, the winning response and the losing response within pairwise data are generated in isolation, leading to weak correlations between them as well as suboptimal alignment performance. To address this issue, we propose an effective framework, named BMC, for Bridging and Modeling Correlations in pairwise data. Firstly, we increase the consistency and informativeness of the pairwise preference signals via targeted modifications, synthesizing a pseudo-winning response by improving the losing response based on the winning response. Secondly, we identify that DPO alone is insufficient to model these correlations and capture nuanced variations. Therefore, we propose learning token-level correlations by dynamically leveraging the policy model's confidence during training. Comprehensive experiments on QA, math, and instruction-following tasks demonstrate the effectiveness of our approach, which significantly surpasses competitive baselines, including DPO. Additionally, our in-depth quantitative analysis reveals the reasons behind our method's superior performance over DPO and showcases its applicability to other DPO variants.
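To make the setting concrete, below is a minimal sketch of the standard sequence-level DPO objective together with an illustrative token-weighted variant in which each token's log-ratio is re-weighted by the policy's per-token confidence. This is only an assumption-laden illustration of the general idea described in the abstract, not the paper's BMC implementation; the function names (`dpo_loss`, `token_weighted_dpo_loss`) and the specific weighting scheme are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logps_w, policy_logps_l,
             ref_logps_w, ref_logps_l, beta=0.1):
    """Standard sequence-level DPO loss.

    Each argument is the summed log-probability of the winning (w) or
    losing (l) response under the policy or the frozen reference model.
    """
    logits = beta * ((policy_logps_w - ref_logps_w)
                     - (policy_logps_l - ref_logps_l))
    return -F.logsigmoid(logits).mean()

def token_weighted_dpo_loss(policy_token_logps_w, policy_token_logps_l,
                            ref_token_logps_w, ref_token_logps_l,
                            mask_w, mask_l, beta=0.1):
    """Hypothetical token-level variant: each token's policy/reference
    log-ratio is re-weighted by the policy's confidence in that token
    (its per-token probability) before summing over the sequence.
    Masks select real (non-padding) token positions."""
    # Per-token confidence under the policy; detached so the weights
    # themselves receive no gradient.
    conf_w = policy_token_logps_w.detach().exp()
    conf_l = policy_token_logps_l.detach().exp()

    ratio_w = ((policy_token_logps_w - ref_token_logps_w) * conf_w * mask_w).sum(-1)
    ratio_l = ((policy_token_logps_l - ref_token_logps_l) * conf_l * mask_l).sum(-1)

    logits = beta * (ratio_w - ratio_l)
    return -F.logsigmoid(logits).mean()
```

The sketch assumes per-token log-probabilities of shape (batch, sequence_length) gathered from the policy and reference models; how BMC actually constructs and applies its token-level correlation weights is detailed in the paper itself.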

@article{jiang2025_2408.07471,
  title={Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization},
  author={Yuxin Jiang and Bo Huang and Yufei Wang and Xingshan Zeng and Liangyou Li and Yasheng Wang and Xin Jiang and Lifeng Shang and Ruiming Tang and Wei Wang},
  journal={arXiv preprint arXiv:2408.07471},
  year={2025}
}