ActiveDPO: Active Direct Preference Optimization for Sample-Efficient Alignment

The recent success of using human preferences to align large language models (LLMs) has significantly improved their performance on various downstream tasks such as question answering, mathematical reasoning, and code generation. However, effective LLM alignment depends on high-quality human preference datasets. Collecting these datasets requires human preference annotation, which is costly and resource-intensive, necessitating efficient active data selection methods. Existing methods either lack a strong theoretical foundation or depend on restrictive reward function assumptions (e.g., linearity). To this end, we propose an algorithm, ActiveDPO, that uses a theoretically grounded data selection criterion for non-linear reward functions while directly leveraging the LLM itself to parameterize the reward model used for active data selection. As a result, ActiveDPO explicitly accounts for the influence of the LLM being aligned on data selection, unlike methods that select data without considering that LLM, thereby leading to more effective and efficient data collection. Extensive experiments show that ActiveDPO outperforms existing methods across various models and datasets.
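To make the idea of LLM-parameterized active selection concrete, below is a minimal sketch. It scores unlabeled response pairs with the DPO-style implicit reward (the scaled log-ratio of the policy and reference likelihoods) and sends the pairs with the smallest reward margin to annotators. The toy model `TinyLM`, the margin-based uncertainty proxy, and the constants are illustrative assumptions, not the paper's exact criterion or implementation.

```python
# Hypothetical sketch: active selection of preference pairs using a DPO-style
# implicit reward. The margin-based uncertainty proxy is an assumption made for
# illustration; ActiveDPO's actual selection criterion is defined in the paper.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, BETA = 100, 32, 0.1  # toy vocabulary size, hidden size, DPO temperature


class TinyLM(torch.nn.Module):
    """Toy next-token model standing in for the LLM policy / reference."""

    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, DIM)
        self.head = torch.nn.Linear(DIM, VOCAB)

    def log_prob(self, tokens: torch.Tensor) -> torch.Tensor:
        # Sum of per-token log-likelihoods of the given response tokens.
        logits = self.head(self.emb(tokens))              # (T, VOCAB)
        logp = F.log_softmax(logits, dim=-1)
        return logp.gather(-1, tokens.unsqueeze(-1)).sum()


policy, reference = TinyLM(), TinyLM()


def implicit_reward(tokens: torch.Tensor) -> torch.Tensor:
    # DPO implicit reward: beta * (log pi_theta(y|x) - log pi_ref(y|x)).
    return BETA * (policy.log_prob(tokens) - reference.log_prob(tokens))


# Candidate pool: unlabeled response pairs (y1, y2) for hypothetical prompts.
pool = [(torch.randint(0, VOCAB, (12,)), torch.randint(0, VOCAB, (12,)))
        for _ in range(50)]


def reward_margin(pair) -> float:
    # A small |reward margin| means the current model cannot tell the pair
    # apart, so (heuristically) a human label for it is most informative.
    y1, y2 = pair
    with torch.no_grad():
        return abs(implicit_reward(y1) - implicit_reward(y2)).item()


# Query annotators on the k pairs the model is least certain about,
# then run a DPO update on the newly labeled preferences.
k = 5
selected = sorted(range(len(pool)), key=lambda i: reward_margin(pool[i]))[:k]
print("indices to annotate:", selected)
```

Because the reward is parameterized by the policy itself, the selection scores change as alignment proceeds, which is the behavior the abstract contrasts with LLM-agnostic selection methods.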
@article{lin2025_2505.19241,
  title   = {ActiveDPO: Active Direct Preference Optimization for Sample-Efficient Alignment},
  author  = {Xiaoqiang Lin and Arun Verma and Zhongxiang Dai and Daniela Rus and See-Kiong Ng and Bryan Kian Hsiang Low},
  journal = {arXiv preprint arXiv:2505.19241},
  year    = {2025}
}