
Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning

Abstract

The effectiveness of instruction fine-tuning for Large Language Models is fundamentally constrained by the quality and efficiency of its training datasets. This work introduces Low-Confidence Gold (LCG), a novel filtering framework that employs centroid-based clustering and confidence-guided selection to identify valuable instruction pairs. Through a semi-supervised approach using a lightweight classifier trained on representative samples, LCG curates high-quality subsets while preserving data diversity. Experimental evaluation demonstrates that models fine-tuned on LCG-filtered subsets of 6K samples outperform existing methods, with substantial improvements on MT-bench and consistent gains across comprehensive evaluation metrics. The framework's ability to maintain model performance with far less training data establishes a promising direction for efficient instruction tuning.
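The abstract names the pipeline but not its implementation details, so the following is a minimal sketch of one plausible reading: k-means over sentence embeddings, centroid-nearest samples as pseudo-labeled seeds for a lightweight classifier, and selection of the samples the classifier scores with the lowest confidence. The cluster count, seed size, logistic-regression classifier, and the `lcg_filter` helper are all illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def lcg_filter(embeddings: np.ndarray, n_clusters: int = 8,
               seeds_per_cluster: int = 50, k: int = 6000) -> np.ndarray:
    """Return indices of a k-sample low-confidence subset.

    embeddings: (N, d) sentence embeddings of instruction-response pairs.
    """
    # 1) Centroid-based clustering of the embedding space.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    dists = km.transform(embeddings)  # (N, n_clusters) distances to centroids

    # 2) Semi-supervised step: the samples nearest each centroid serve as
    #    pseudo-labeled representatives for a lightweight classifier.
    seed_idx = np.unique(np.argsort(dists, axis=0)[:seeds_per_cluster].ravel())
    clf = LogisticRegression(max_iter=1000).fit(
        embeddings[seed_idx], km.labels_[seed_idx])

    # 3) Confidence-guided selection: keep the k samples the classifier is
    #    least certain about, i.e. those least redundant with the prototypes.
    confidence = clf.predict_proba(embeddings).max(axis=1)
    return np.argsort(confidence)[:k]
```

Keeping the lowest-confidence samples, rather than discarding them, is the inversion the title alludes to: points the cluster classifier cannot place confidently tend to lie away from prototypical, redundant examples, which is consistent with the abstract's claim that the filter preserves data diversity.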

@article{cai2025_2502.18978,
  title={Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning},
  author={Hongyi Cai and Jie Li and Wenzhen Dong},
  journal={arXiv preprint arXiv:2502.18978},
  year={2025}
}