
DataRater: Meta-Learned Dataset Curation

Abstract

The quality of foundation models depends heavily on their training data. Consequently, great efforts have been put into dataset curation. Yet most approaches rely on manual tuning of coarse-grained mixtures of large buckets of data, or on filtering by hand-crafted heuristics. An approach that is ultimately more scalable (let alone more satisfying) is to learn which data is actually valuable for training. This type of meta-learning could allow more sophisticated, fine-grained, and effective curation. Our proposed DataRater is an instance of this idea. It estimates the value of training on any particular data point. This is done by meta-learning using "meta-gradients", with the objective of improving training efficiency on held-out data. In extensive experiments across a range of model scales and datasets, we find that using our DataRater to filter data is highly effective, resulting in significantly improved compute efficiency.
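To make the mechanism concrete, below is a minimal JAX sketch of meta-gradient data valuation in the spirit of the abstract; it is not the authors' implementation. A small "rater" scores each training example, an inner step trains a model on score-weighted losses, and the meta-gradient of a held-out loss with respect to the rater's parameters updates the rater. The linear model, linear scorer, single inner SGD step, and synthetic noisy-bucket data are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

D = 8  # base feature dim; a final "source id" feature marks the noisy bucket

def model_loss(w, x, y):
    # Per-example squared error of a linear model (a stand-in for an LM loss).
    return (x @ w - y) ** 2

def rater_scores(phi, x):
    # The rater maps each data point to a scalar value; a linear scorer
    # is used here purely for illustration.
    return x @ phi

def inner_step(w, phi, x, y, lr=0.1):
    # One weighted training step: per-example losses are reweighted by
    # softmaxed rater scores before summing (the weights sum to 1).
    weights = jax.nn.softmax(rater_scores(phi, x))
    grads = jax.grad(lambda w_: jnp.sum(weights * model_loss(w_, x, y)))(w)
    return w - lr * grads

def meta_loss(phi, w, x_tr, y_tr, x_val, y_val):
    # Held-out loss after one inner update; differentiating through
    # inner_step with respect to phi yields the meta-gradient.
    w_new = inner_step(w, phi, x_tr, y_tr)
    return jnp.mean(model_loss(w_new, x_val, y_val))

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
w_true = jnp.ones(D)

# Clean bucket: correct labels, source flag = 0.
x_c = jax.random.normal(k1, (24, D))
y_c = x_c @ w_true
# Noisy bucket: random labels, source flag = 1.
x_n = jax.random.normal(k2, (8, D))
y_n = jax.random.normal(k3, (8,))

flag = lambda x, v: jnp.concatenate([x, jnp.full((x.shape[0], 1), v)], axis=1)
x_tr = jnp.concatenate([flag(x_c, 0.0), flag(x_n, 1.0)])
y_tr = jnp.concatenate([y_c, y_n])
x_val = flag(jax.random.normal(k4, (16, D)), 0.0)
y_val = x_val[:, :D] @ w_true

w, phi = jnp.zeros(D + 1), jnp.zeros(D + 1)
for _ in range(300):
    # Outer loop: meta-gradient step on the rater, then an inner model step.
    phi = phi - 0.5 * jax.grad(meta_loss)(phi, w, x_tr, y_tr, x_val, y_val)
    w = inner_step(w, phi, x_tr, y_tr)

print("mean score, clean bucket:", rater_scores(phi, x_tr)[:24].mean())
print("mean score, noisy bucket:", rater_scores(phi, x_tr)[24:].mean())
```

After meta-training, the noisy bucket should receive systematically lower scores; this per-example score is the kind of signal a DataRater-style filter would threshold on when curating a dataset.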

@article{calian2025_2505.17895,
  title={DataRater: Meta-Learned Dataset Curation},
  author={Dan A. Calian and Gregory Farquhar and Iurii Kemaev and Luisa M. Zintgraf and Matteo Hessel and Jeremy Shar and Junhyuk Oh and András György and Tom Schaul and Jeffrey Dean and Hado van Hasselt and David Silver},
  journal={arXiv preprint arXiv:2505.17895},
  year={2025}
}