ProDS: Preference-oriented Data Selection for Instruction Tuning

19 May 2025
Wenya Guo
Zhengkun Zhang
Xumeng Liu
Ying Zhang
Ziyu Lu
Haoze Zhu
Xubo Liu
Ruxue Yan
Abstract

Instruction data selection aims to identify a high-quality subset of the training set that matches or exceeds the performance of the full dataset on target tasks. Existing methods focus on the instruction-to-response mapping but neglect human preferences among diverse responses. In this paper, we propose a Preference-oriented Data Selection method (ProDS) that scores training samples based on their alignment with the preferences observed in the target set. Our key innovation lies in shifting the data selection criterion from merely estimating features for accurate response generation to explicitly aligning training samples with human preferences on the target tasks. Specifically, direct preference optimization (DPO) is employed to estimate human preferences across diverse responses. In addition, a bidirectional preference synthesis strategy is designed to score training samples according to both positive and negative preferences. Extensive experimental results demonstrate the superiority of ProDS over existing task-agnostic and targeted methods.
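The abstract gives the general shape of the scoring pipeline: DPO supplies a preference signal over responses, and a bidirectional rule combines positive and negative preference alignment into a per-sample score. The following minimal sketch illustrates that shape only; the function names, the linear combination rule, and all numeric values are assumptions for illustration, not the paper's actual formulation. It uses DPO's standard implicit reward, r(x, y) = beta * (log p_policy(y|x) - log p_ref(y|x)).

```python
# Hypothetical sketch of DPO-style bidirectional preference scoring.
# All names, constants, and the combination rule are illustrative
# assumptions; ProDS's actual scoring function may differ.

def implicit_reward(logp_policy: float, logp_ref: float, beta: float = 0.1) -> float:
    """DPO's implicit reward: scaled log-prob margin of the
    preference-tuned policy over the reference model."""
    return beta * (logp_policy - logp_ref)

def bidirectional_score(pos_margin: float, neg_margin: float, lam: float = 1.0) -> float:
    """Combine alignment with positive preferences and disagreement
    with negative preferences (hypothetical linear rule)."""
    return pos_margin - lam * neg_margin

# Toy example: a training sample whose response the preference-tuned
# model favors (pos_margin > 0) and whose rejected alternative it
# disfavors (neg_margin < 0) receives a high selection score.
pos = implicit_reward(logp_policy=-10.0, logp_ref=-12.0)  # 0.1 * 2.0 = 0.2
neg = implicit_reward(logp_policy=-14.0, logp_ref=-11.0)  # 0.1 * -3.0 = -0.3
score = bidirectional_score(pos, neg)                     # 0.2 - (-0.3) = 0.5
```

In practice the log-probabilities would come from scoring each training response under the DPO-tuned and reference models; samples would then be ranked by `score` and the top fraction retained for instruction tuning.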

@article{guo2025_2505.12754,
  title={ProDS: Preference-oriented Data Selection for Instruction Tuning},
  author={Wenya Guo and Zhengkun Zhang and Xumeng Liu and Ying Zhang and Ziyu Lu and Haoze Zhu and Xubo Liu and Ruxue Yan},
  journal={arXiv preprint arXiv:2505.12754},
  year={2025}
}