Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning

Main: 8 pages
15 figures
Bibliography: 3 pages
1 table
Appendix: 6 pages
Abstract

Curriculum learning has shown promise in improving training efficiency and generalization in various machine learning domains, yet its potential in pretraining language models remains underexplored, prompting our work as the first systematic investigation in this area. We experiment with different settings, including vanilla curriculum learning, pacing-based sampling, and interleaved curricula, guided by six difficulty metrics spanning linguistic and information-theoretic perspectives. We train models under these settings and evaluate their performance on eight diverse benchmarks. Our experiments reveal that curriculum learning consistently improves convergence in the early and mid-training phases, and can yield lasting gains of up to 3.5% when used as a warmup strategy. Notably, we identify compression ratio, lexical diversity, and readability as effective difficulty signals across settings. Our findings highlight the importance of data ordering in large-scale pretraining and provide actionable insights for scalable, data-efficient model development under realistic training scenarios.
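
The abstract names compression ratio, lexical diversity, and readability as effective difficulty signals. The sketch below is a minimal illustration of how such signals might be computed and combined to order a corpus from easy to hard for a curriculum warmup; the exact metric formulations and weighting used in the paper are not given in the abstract, so the definitions and the equal-weight difficulty score here are illustrative assumptions.

import zlib
import re

def compression_ratio(text: str) -> float:
    # Raw byte length over zlib-compressed length; repetitive,
    # low-entropy text compresses well and gets a higher ratio.
    raw = text.encode("utf-8")
    return len(raw) / max(1, len(zlib.compress(raw)))

def lexical_diversity(text: str) -> float:
    # Type-token ratio: unique word types over total word tokens.
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(tokens)) / max(1, len(tokens))

def readability(text: str) -> float:
    # Crude Flesch-style reading-ease proxy from sentence and word
    # lengths (higher = easier); a stand-in, not the paper's metric.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[a-zA-Z']+", text)
    n_words = max(1, len(words))
    avg_sentence_len = n_words / sentences
    avg_word_len = sum(len(w) for w in words) / n_words
    return 206.835 - 1.015 * avg_sentence_len - 10.0 * avg_word_len

def curriculum_order(docs: list[str]) -> list[str]:
    # Sort documents easy-to-hard: harder documents are less
    # compressible, lexically more diverse, and less readable.
    def difficulty(doc: str) -> float:
        return (-compression_ratio(doc)
                + lexical_diversity(doc)
                - readability(doc) / 100.0)
    return sorted(docs, key=difficulty)

if __name__ == "__main__":
    corpus = [
        "The cat sat on the mat. The cat sat again.",
        "Quantum chromodynamics describes the strong interaction between quarks and gluons.",
    ]
    for doc in curriculum_order(corpus):
        print(doc[:60])

A vanilla curriculum would feed the sorted stream directly; a pacing-based variant would instead sample from a gradually widening prefix of the sorted corpus, which is why a single scalar difficulty score per document is a convenient interface.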

@article{zhang2025_2506.11300,
  title={Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning},
  author={Yang Zhang and Amr Mohamed and Hadi Abdine and Guokan Shang and Michalis Vazirgiannis},
  journal={arXiv preprint arXiv:2506.11300},
  year={2025}
}