Compute-Constrained Data Selection

Abstract

Data selection can reduce the amount of training data needed to finetune LLMs; however, the efficacy of data selection scales directly with its compute. Motivated by the practical challenge of compute-constrained finetuning, we consider the setting in which both the cost of selecting data and the cost of training are budgeted for. We first formalize the problem of data selection with a cost-aware utility function, and model it as trading off initial selection cost against training gain. We run a comprehensive sweep of experiments across multiple tasks, varying the compute budget by scaling finetuning tokens, model sizes, and data selection compute. These experiments validate the model in real-world settings. Interestingly, we find that many powerful data selection methods are almost never compute-optimal, and that cheaper data selection alternatives dominate from both a theoretical and an empirical perspective.
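
As a rough sketch (the exact formulation is given in the paper; the symbols below are illustrative assumptions, not the authors' notation), the compute-constrained setting can be read as choosing a subset $S$ of the candidate data $D$ to maximize downstream utility under a joint budget on selection and training compute:

$$
\max_{S \subseteq D} \; U(S)
\quad \text{s.t.} \quad
C_{\text{select}}(D) + C_{\text{train}}(S) \le B,
$$

where $C_{\text{select}}(D)$ is the compute spent scoring candidate examples, $C_{\text{train}}(S)$ is the compute spent finetuning on the selected subset, and $B$ is the total budget. The trade-off the abstract describes follows directly: every unit of compute spent on more expensive selection is a unit unavailable for training on more data.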

@article{yin2025_2410.16208,
  title={Compute-Constrained Data Selection},
  author={Junjie Oscar Yin and Alexander M. Rush},
  journal={arXiv preprint arXiv:2410.16208},
  year={2025}
}