
Not All Documents Are What You Need for Extracting Instruction Tuning Data

Abstract

Instruction tuning improves the performance of large language models (LLMs), but it heavily relies on high-quality training data. Recently, LLMs have been used to synthesize instruction data using seed question-answer (QA) pairs. However, these synthesized instructions often lack diversity and tend to be similar to the input seeds, limiting their applicability in real-world scenarios. To address this, we propose extracting instruction tuning data from web corpora that contain rich and diverse knowledge. A naive solution is to retrieve domain-specific documents and extract all QA pairs from them, but this faces two key challenges: (1) extracting all QA pairs using LLMs is prohibitively expensive, and (2) many extracted QA pairs may be irrelevant to the downstream tasks, potentially degrading model performance. To tackle these issues, we introduce EQUAL, an effective and scalable data extraction framework that iteratively alternates between document selection and high-quality QA pair extraction to enhance instruction tuning. EQUAL first clusters the document corpus based on embeddings derived from contrastive learning, then uses a multi-armed bandit strategy to efficiently identify clusters that are likely to contain valuable QA pairs. This iterative approach significantly reduces computational cost while boosting model performance. Experiments on AutoMathText and StackOverflow across four downstream tasks show that EQUAL reduces computational costs by 5-10x and improves accuracy by 2.5% on LLaMA-3.1-8B and Mistral-7B.
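The abstract does not spell out the cluster-selection step, but the multi-armed bandit idea can be pictured as a standard upper-confidence-bound (UCB) loop over document clusters. The sketch below is a minimal illustration under that assumption, not the authors' implementation: `sample_cluster_reward`, `ucb_select_clusters`, and the exploration constant `c` are hypothetical names, and the reward oracle is a stub standing in for LLM-based QA extraction and quality scoring.

```python
import math
import random

# Hypothetical reward oracle: in a framework like EQUAL, "pulling" a cluster
# would mean extracting QA pairs from a few of its documents with an LLM and
# scoring how useful they are for the downstream task. Here we stub it with a
# fixed per-cluster quality plus Gaussian noise.
def sample_cluster_reward(cluster_id, true_quality):
    return max(0.0, min(1.0, random.gauss(true_quality[cluster_id], 0.1)))

def ucb_select_clusters(num_clusters, budget, true_quality, c=1.0):
    """Generic UCB loop over document clusters (illustrative sketch only)."""
    counts = [0] * num_clusters    # how often each cluster has been sampled
    means = [0.0] * num_clusters   # running mean reward per cluster
    for t in range(1, budget + 1):
        if t <= num_clusters:
            arm = t - 1            # sample every cluster once to initialize
        else:
            ucb = [means[i] + c * math.sqrt(math.log(t) / counts[i])
                   for i in range(num_clusters)]
            arm = max(range(num_clusters), key=lambda i: ucb[i])
        reward = sample_cluster_reward(arm, true_quality)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts, means

if __name__ == "__main__":
    random.seed(0)
    quality = [0.2, 0.8, 0.5, 0.3]  # unknown to the selector
    counts, means = ucb_select_clusters(len(quality), budget=200,
                                        true_quality=quality)
    print("samples per cluster:", counts)
    # The extraction budget should concentrate on the high-quality cluster,
    # which is how such a loop avoids paying LLM cost on unpromising documents.
```

In the paper's setting, the "reward" would come from actual QA-pair quality estimates rather than a known vector, and the clusters come from contrastive-learning embeddings; the sketch only conveys the budget-allocation logic.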

@article{zhang2025_2505.12250,
  title={Not All Documents Are What You Need for Extracting Instruction Tuning Data},
  author={Chi Zhang and Huaping Zhong and Hongtao Li and Chengliang Chai and Jiawei Hong and Yuhao Deng and Jiacheng Wang and Tian Tan and Yizhou Yan and Jiantao Qiu and Ye Yuan and Guoren Wang and Conghui He and Lei Cao},
  journal={arXiv preprint arXiv:2505.12250},
  year={2025}
}