Instruction data selection aims to identify a high-quality subset of the training set that matches or exceeds the performance of the full dataset on target tasks. Existing methods focus on the instruction-to-response mapping but neglect human preferences over diverse responses. In this paper, we propose a Preference-oriented Data Selection method (ProDS) that scores training samples based on their alignment with preferences observed in the target set. Our key innovation lies in shifting the data selection criterion from merely estimating features for accurate response generation to explicitly aligning training samples with human preferences in target tasks. Specifically, direct preference optimization (DPO) is employed to estimate human preferences across diverse responses. In addition, a bidirectional preference synthesis strategy is designed to score training samples according to both positive and negative preferences. Extensive experimental results demonstrate the superiority of our method over existing task-agnostic and targeted approaches.
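The abstract does not spell out the objectives involved; as background, the standard DPO loss it builds on is reproduced below, followed by a purely illustrative bidirectional score. The symbols $g^{+}$, $g^{-}$, $\mathrm{sim}$, and $\alpha$ in the second equation are assumptions for exposition, not the paper's actual formulation.

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[\log\sigma\!\left(
\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
-\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
\right)\right]
\]

Here $y_w$ and $y_l$ denote the preferred and dispreferred responses to prompt $x$, $\pi_{\mathrm{ref}}$ is the reference policy, and $\beta$ controls the strength of the preference margin. A bidirectional score in the spirit of the abstract might, for example, reward a training sample $z$ for resembling positive preference directions $g^{+}$ extracted from the target set while penalizing resemblance to negative ones $g^{-}$:

\[
s(z) = \alpha\,\mathrm{sim}\!\big(g(z),\,g^{+}\big)
\;-\;(1-\alpha)\,\mathrm{sim}\!\big(g(z),\,g^{-}\big),
\qquad \alpha\in[0,1],
\]

where $g(\cdot)$ is some feature representation and $\mathrm{sim}$ a similarity measure; the concrete choices used by ProDS are described in the paper itself.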
@article{guo2025_2505.12754,
  title   = {ProDS: Preference-oriented Data Selection for Instruction Tuning},
  author  = {Wenya Guo and Zhengkun Zhang and Xumeng Liu and Ying Zhang and Ziyu Lu and Haoze Zhu and Xubo Liu and Ruxue Yan},
  journal = {arXiv preprint arXiv:2505.12754},
  year    = {2025}
}