
Analyzing & Reducing the Need for Learning Rate Warmup in GPT Training

Abstract

Learning Rate Warmup is a popular heuristic for training neural networks, especially at larger batch sizes, despite limited understanding of its benefits. Warmup decreases the update size $\Delta \mathbf{w}_t = \eta_t \mathbf{u}_t$ early in training by using lower values for the learning rate $\eta_t$. In this work we argue that warmup benefits training by keeping the overall size of $\Delta \mathbf{w}_t$ limited, counteracting large initial values of $\mathbf{u}_t$. Focusing on small-scale GPT training with AdamW/Lion, we explore the following question: Why and by which criteria are early updates $\mathbf{u}_t$ too large? We analyze different metrics for the update size, including the $\ell_2$-norm, resulting directional change, and impact on the representations of the network, providing a new perspective on warmup. In particular, we find that warmup helps counteract large angular updates as well as a limited critical batch size early in training. Finally, we show that the need for warmup can be significantly reduced or eliminated by modifying the optimizer to explicitly normalize $\mathbf{u}_t$ based on the aforementioned metrics.
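
The abstract does not spell out how the update is normalized. As a hedged illustration of the general idea, the sketch below caps the relative $\ell_2$ update size per parameter tensor on top of standard PyTorch AdamW; the class name `RelativeUpdateClippedAdamW` and the `max_relative_update` threshold are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch, not the paper's implementation: one way to "normalize"
# the update on top of AdamW by capping the relative l2 update size
# ||eta_t * u_t|| / ||w_t|| for each parameter tensor.
import torch


class RelativeUpdateClippedAdamW(torch.optim.AdamW):
    """AdamW whose per-tensor update is rescaled so that
    ||w_new - w_old|| <= max_relative_update * ||w_old||."""

    def __init__(self, params, max_relative_update=1e-3, **kwargs):
        super().__init__(params, **kwargs)
        self.max_relative_update = max_relative_update

    @torch.no_grad()
    def step(self, closure=None):
        # Snapshot parameters, take an ordinary AdamW step, then shrink any
        # update whose relative norm exceeds the cap.
        prev = {
            p: p.detach().clone()
            for group in self.param_groups
            for p in group["params"]
            if p.grad is not None
        }
        loss = super().step(closure)
        for p, w_old in prev.items():
            w_norm = w_old.norm()
            if w_norm == 0:
                continue  # leave zero-initialized tensors to plain AdamW
            delta = p.detach() - w_old
            limit = self.max_relative_update * w_norm
            if delta.norm() > limit:
                p.copy_(w_old + delta * (limit / delta.norm()))
        return loss
```

In this sketch the optimizer would be used as a drop-in replacement, e.g. `optimizer = RelativeUpdateClippedAdamW(model.parameters(), lr=3e-4, weight_decay=0.1, max_relative_update=1e-3)`. The other criteria mentioned in the abstract (angular change, impact on representations) would need different measurements, but could in principle plug into the same post-step rescaling hook.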
