During long-duration Large Language Model (LLM) training runs, the gradient norm increases rapidly near the end of training. In this short note, we show that this increase is due to an unintended interaction between weight decay, normalization layers, and the learning rate schedule. We propose a simple correction that fixes this behavior while also resulting in lower loss values throughout training.
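The abstract does not spell out the mechanism, but a standard analysis of scale-invariant parameters suggests how the three ingredients can interact. The derivation below is an illustrative sketch, under the assumption that the gradient-norm growth arises in weights that feed directly into normalization layers; it is not reproduced from the note itself.

For a weight matrix $W$ whose output is immediately normalized (e.g., by LayerNorm or RMSNorm), the loss is scale-invariant in $W$: $L(cW) = L(W)$ for every $c > 0$. Differentiating at $c = 1$ gives
\[
\langle \nabla L(W), W \rangle = 0, \qquad \nabla L(cW) = \tfrac{1}{c}\,\nabla L(W),
\]
so the gradient norm scales inversely with the weight norm, $\|\nabla L(W)\| \propto 1/\|W\|$. With decoupled weight decay $\lambda$, learning rate $\eta_t$, and gradient $g_t$ (treated as approximately orthogonal to $W_t$), the update $W_{t+1} = (1 - \eta_t \lambda) W_t - \eta_t g_t$ gives
\[
\|W_{t+1}\|^2 \approx (1 - \eta_t \lambda)^2 \|W_t\|^2 + \eta_t^2 \|g_t\|^2 .
\]
Setting $\|W_{t+1}\| = \|W_t\|$ and dropping the $(\eta_t\lambda)^2$ term yields the equilibrium condition
\[
2 \eta_t \lambda \,\|W_t\|^2 \approx \eta_t^2 \|g_t\|^2
\quad\Longrightarrow\quad
\|W_t\|^2 \approx \frac{\eta_t \|g_t\|^2}{2\lambda},
\]
and combined with $\|g_t\| \propto 1/\|W_t\|$ this gives an equilibrium weight norm $\|W_t\| \propto (\eta_t/\lambda)^{1/4}$. As the learning-rate schedule drives $\eta_t$ toward zero near the end of training, the equilibrium norm shrinks, and the gradient norm, proportional to $1/\|W_t\|$, rises.

Under this reading, one way to keep the equilibrium norm (and hence the gradient norm) stable would be to co-vary the weight-decay coefficient with the learning-rate schedule so that $\eta_t/\lambda_t$ remains constant; whether this matches the correction proposed in the note is not stated in the abstract.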
@article{defazio2025_2506.02285,
  title   = {Why Gradients Rapidly Increase Near the End of Training},
  author  = {Aaron Defazio},
  journal = {arXiv preprint arXiv:2506.02285},
  year    = {2025}
}