
Progressive Residual Warmup for Language Model Pretraining

Tianhao Chen
Xin Xu
Lu Yin
Hao Chen
Yang Wang
Shizhe Diao
Can Yang
Main: 8 pages · 7 figures · 7 tables · Appendix: 2 pages · Bibliography: 3 pages
Abstract

Transformer architectures serve as the backbone of most modern Large Language Models, so their pretraining stability and convergence speed are of central concern. Motivated by the logical dependency among sequentially stacked layers, we propose Progressive Residual Warmup (ProRes) for language model pretraining. ProRes implements an "early layer learns first" philosophy by multiplying each layer's residual branch by a scalar that gradually warms up from 0 to 1, with deeper layers warming up over more steps. In this way, deeper layers wait for earlier layers to settle into a more stable regime before contributing to learning. We demonstrate the effectiveness of ProRes through pretraining experiments across various model scales, normalization schemes, and initialization schemes. Comprehensive analysis shows that ProRes not only stabilizes pretraining but also induces a distinct optimization trajectory, leading to faster convergence, stronger generalization, and better downstream performance. Our code is available at this https URL.
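To make the mechanism concrete, the sketch below shows one way the per-layer residual scaling could be wired into a Transformer block. The linear depth-dependent schedule, the `base_warmup` constant, and the `ScaledResidualBlock` wrapper are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of the residual-warmup idea described in the abstract.
# The linear per-depth schedule, the base_warmup value, and the module names
# are illustrative assumptions; the paper's exact schedule is not given here.
import torch
import torch.nn as nn


def residual_scale(step: int, layer_idx: int, base_warmup: int = 1000) -> float:
    """Return the scalar in [0, 1] multiplying a layer's residual branch.

    Assumption: each layer's warmup length grows linearly with its depth,
    so deeper layers reach full contribution later than earlier layers.
    """
    warmup_steps = base_warmup * (layer_idx + 1)   # deeper layer -> longer warmup
    return min(1.0, step / warmup_steps)


class ScaledResidualBlock(nn.Module):
    """Wrap a sublayer (attention or MLP) and scale its residual contribution."""

    def __init__(self, sublayer: nn.Module, layer_idx: int):
        super().__init__()
        self.sublayer = sublayer
        self.layer_idx = layer_idx

    def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
        alpha = residual_scale(step, self.layer_idx)
        return x + alpha * self.sublayer(x)          # x + alpha * F(x)
```

Under this schedule, layer 0 reaches full residual contribution after `base_warmup` steps while layer k takes (k+1) times as long, matching the "deeper layers take longer to warm up" ordering described in the abstract.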
