EvoLM: In Search of Lost Language Model Training Dynamics

Main: 12 pages · 13 figures · 11 tables · Bibliography: 6 pages · Appendix: 10 pages
Abstract

Modern language model (LM) training is divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, a model suite that enables systematic and transparent analysis of LMs' training dynamics across pre-training, continued pre-training, supervised fine-tuning, and reinforcement learning. By training over 100 LMs with 1B and 4B parameters from scratch, we rigorously evaluate both upstream (language modeling) and downstream (problem-solving) reasoning capabilities, covering both in-domain and out-of-domain generalization. Key insights highlight the diminishing returns of excessive pre-training and post-training, the importance of, and practical recipes for, mitigating forgetting during domain-specific continued pre-training, the crucial role of continued pre-training in bridging the pre-training and post-training phases, and various intricate trade-offs in configuring supervised fine-tuning and reinforcement learning. To facilitate open research and reproducibility, we release all pre-trained and post-trained models, training datasets for all stages, and our entire training and evaluation pipeline.

@article{qi2025_2506.16029,
  title={EvoLM: In Search of Lost Language Model Training Dynamics},
  author={Zhenting Qi and Fan Nie and Alexandre Alahi and James Zou and Himabindu Lakkaraju and Yilun Du and Eric Xing and Sham Kakade and Hanlin Zhang},
  journal={arXiv preprint arXiv:2506.16029},
  year={2025}
}