MDIT: A Model-free Data Interpolation Method for Diverse Instruction Tuning

9 April 2025
Yangning Li
Zihua Lan
Lv Qingsong
Yinghui Li
Hai-Tao Zheng
Abstract

As Large Language Models (LLMs) are increasingly applied across various tasks, instruction tuning has emerged as a critical method for enhancing model performance. However, current data management strategies face substantial challenges in generating diverse and comprehensive data, restricting further improvements in model performance. To address this gap, we propose MDIT, a novel model-free data interpolation method for diverse instruction tuning, which generates varied and high-quality instruction data by performing task interpolation. Moreover, it employs diversity-based clustering strategies to ensure the diversity of the training data. Extensive experiments show that our method achieves superior performance on multiple benchmark tasks. LLMs fine-tuned with MDIT show significant improvements across numerous tasks such as general question answering, math reasoning, and code generation. MDIT offers an efficient and automatic data synthesis method, generating diverse instruction data without depending on external resources while expanding the application potential of LLMs in complex environments.
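The abstract describes two ingredients: task interpolation to synthesize new instruction data, and diversity-based clustering to keep the resulting set varied. The paper's exact algorithm is not given here, so the sketch below is only a plausible illustration under assumed details: mixup-style linear interpolation between embeddings of samples from two tasks, followed by greedy farthest-point selection as a simple stand-in for a diversity-based clustering step. All function names, the Beta-distributed mixing coefficient, and the toy embeddings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_embeddings(emb_a, emb_b, alpha=0.5):
    """Mixup-style linear interpolation between two task embeddings.

    lam is drawn from Beta(alpha, alpha), as in mixup; MDIT's actual
    interpolation scheme may differ.
    """
    lam = rng.beta(alpha, alpha)
    return lam * emb_a + (1.0 - lam) * emb_b

def diverse_subset(embs, k):
    """Greedy farthest-point selection over candidate embeddings.

    A simple proxy for a diversity-based clustering strategy: each step
    adds the candidate farthest from everything already selected.
    """
    chosen = [0]
    for _ in range(k - 1):
        # Distance from every candidate to its nearest already-chosen point.
        dists = np.min(
            np.linalg.norm(embs[:, None, :] - embs[chosen][None, :, :], axis=-1),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))
    return chosen

# Toy stand-ins for embeddings of instructions from two different tasks
# (e.g. math reasoning vs. code generation).
task_a = rng.normal(0.0, 1.0, size=(8, 16))
task_b = rng.normal(3.0, 1.0, size=(8, 16))

# Synthesize candidates by interpolating paired samples across tasks,
# then keep a diverse subset for instruction tuning.
candidates = np.stack([interpolate_embeddings(task_a[i], task_b[i]) for i in range(8)])
picked = diverse_subset(candidates, k=3)
print(picked)
```

In practice the selected embeddings would be mapped back to (or paired with) concrete instruction-response texts; the abstract emphasizes that the whole pipeline is model-free, i.e. it requires no external teacher LLM to generate the new data.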

@article{li2025_2504.07288,
  title={MDIT: A Model-free Data Interpolation Method for Diverse Instruction Tuning},
  author={Yangning Li and Zihua Lan and Lv Qingsong and Yinghui Li and Hai-Tao Zheng},
  journal={arXiv preprint arXiv:2504.07288},
  year={2025}
}