
DCRM: A Heuristic to Measure Response Pair Quality in Preference Optimization

Main: 7 pages
Appendix: 8 pages
Bibliography: 3 pages
11 figures
21 tables
Abstract

Recent research has attempted to associate preference optimization (PO) performance with properties of the underlying preference datasets. In this work, we observe that the differences between the preferred response $y^+$ and the dispreferred response $y^-$ influence what LLMs can learn, and these may not match the differences that are desirable to learn. We therefore use distance and reward margin to quantify these differences, and combine them into Distance Calibrated Reward Margin (DCRM), a metric that measures the quality of a response pair for PO. Intuitively, DCRM favors pairs with minimal noisy differences and maximal desired differences. With it, we study three commonly used types of preference datasets, classified along two axes: the source of the responses and the preference labeling function. We establish a general correlation: training sets with higher DCRM yield better learning outcomes. Inspired by this, we propose a best-of-$N^2$ pairing method that selects response pairs with the highest DCRM. Empirically, across various settings, our method produces training datasets that further improve models' performance on AlpacaEval, MT-Bench, and Arena-Hard over the existing training sets.
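
To make the abstract's description concrete, below is a minimal Python sketch of DCRM-based pair selection. The abstract does not specify how distance and reward margin are combined, so the division used in dcrm, as well as the names dcrm, best_of_n2_pair, reward_fn, and distance_fn, are illustrative assumptions rather than the paper's actual implementation.

import itertools

def dcrm(reward_margin: float, distance: float, eps: float = 1e-6) -> float:
    # Hypothetical combination: calibrate the reward margin by the
    # distance between the two responses. Dividing is one plausible
    # choice that favors large margins achieved with small differences;
    # the paper may define the combination differently.
    return reward_margin / (distance + eps)

def best_of_n2_pair(responses, reward_fn, distance_fn):
    # Best-of-N^2 pairing (sketch): score every ordered pair drawn from
    # the N candidate responses and keep the (preferred, dispreferred)
    # pair with the highest DCRM.
    best_pair, best_score = None, float("-inf")
    for y_plus, y_minus in itertools.permutations(responses, 2):
        margin = reward_fn(y_plus) - reward_fn(y_minus)
        if margin <= 0:
            continue  # keep only pairs where y_plus is actually preferred
        score = dcrm(margin, distance_fn(y_plus, y_minus))
        if score > best_score:
            best_pair, best_score = (y_plus, y_minus), score
    return best_pair

Given N sampled responses, the loop enumerates all ordered candidate pairs, matching the best-of-$N^2$ framing of scoring every possible pairing before selecting one.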

@article{huang2025_2506.14157,
  title={DCRM: A Heuristic to Measure Response Pair Quality in Preference Optimization},
  author={Chengyu Huang and Tanya Goyal},
  journal={arXiv preprint arXiv:2506.14157},
  year={2025}
}