InsBank: Evolving Instruction Subset for Ongoing Alignment

17 February 2025
Jiayi Shi
Yiwei Li
Shaoxiong Feng
Peiwen Yuan
Xinglin Wang
Yueqi Zhang
Chuyi Tan
Boyuan Pan
Huan Ren
Yao Hu
Kan Li
Abstract

Large language models (LLMs) typically undergo instruction tuning to enhance alignment. Recent studies emphasize that the quality and diversity of instruction data matter more than sheer quantity, highlighting the need to select diverse, high-quality subsets to reduce training costs. However, how to evolve these selected subsets alongside the development of new instruction data remains insufficiently explored. To achieve ongoing alignment of LLMs, we introduce Instruction Bank (InsBank), a continuously updated repository that integrates the latest valuable instruction data. We further propose Progressive Instruction Bank Evolution (PIBE), a novel framework designed to evolve InsBank effectively and efficiently over time. PIBE employs a gradual data selection strategy to maintain long-term efficiency, leveraging a representation-based diversity score that captures relationships between data points and retains historical information for comprehensive diversity evaluation. This also allows diversity and quality scores to be flexibly combined during data selection and ranking. Extensive experiments demonstrate that PIBE significantly outperforms baselines in InsBank evolution and can extract budget-specific subsets, showing its effectiveness and adaptability.

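The abstract describes PIBE only at a high level. As a rough illustration of one ingredient it mentions, combining a representation-based diversity score with a quality score during selection and ranking, the sketch below performs a greedy budget-constrained selection. This is not the paper's algorithm: the names (embeddings, quality, alpha, budget) and the max-cosine-similarity diversity proxy are illustrative assumptions.

# Minimal sketch (not the paper's implementation) of selecting an instruction
# subset by trading off a quality score against redundancy measured in
# representation space. All parameter names and the diversity proxy are
# assumptions made for illustration.
import numpy as np

def select_subset(embeddings: np.ndarray, quality: np.ndarray,
                  budget: int, alpha: float = 0.5) -> list[int]:
    """Greedily pick `budget` items by a weighted quality/diversity score.

    embeddings: (n, d) L2-normalized representations of instruction samples.
    quality:    (n,) per-sample quality scores in [0, 1].
    alpha:      weight on quality versus diversity.
    """
    n = embeddings.shape[0]
    selected: list[int] = []
    # Each candidate's maximum cosine similarity to the selected set;
    # lower similarity means the candidate adds more diversity.
    max_sim = np.zeros(n)
    for _ in range(min(budget, n)):
        diversity = 1.0 - max_sim                     # higher = more novel
        combined = alpha * quality + (1 - alpha) * diversity
        combined[selected] = -np.inf                  # never re-pick an item
        pick = int(np.argmax(combined))
        selected.append(pick)
        # Update redundancy against the newly selected item.
        max_sim = np.maximum(max_sim, embeddings @ embeddings[pick])
    return selected

In an evolving-bank setting, such a selection would presumably be re-run as new instruction data arrives, carrying forward statistics of the retained subset; the paper's actual diversity score and its mechanism for retaining historical information differ from this toy version.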
@article{shi2025_2502.11419,
  title={InsBank: Evolving Instruction Subset for Ongoing Alignment},
  author={Jiayi Shi and Yiwei Li and Shaoxiong Feng and Peiwen Yuan and Xinglin Wang and Yueqi Zhang and Chuyi Tan and Boyuan Pan and Huan Ren and Yao Hu and Kan Li},
  journal={arXiv preprint arXiv:2502.11419},
  year={2025}
}