As the default optimizer for training large language models, AdamW has achieved remarkable success in deep learning. However, its convergence behavior is not theoretically well understood. This paper establishes the convergence rate $\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\|\nabla f(\mathbf{x}^k)\|_1\right]\le O\!\big(\frac{\sqrt{d}\,C}{K^{1/4}}\big)$ for AdamW measured by the $\ell_1$ norm, where $K$ represents the iteration number, $d$ denotes the model dimension, and $C$ matches the constant in the optimal convergence rate of SGD. Theoretically, we have $\|\nabla f(\mathbf{x})\|_1=\Theta\big(\sqrt{d}\,\|\nabla f(\mathbf{x})\|_2\big)$ when each element of $\nabla f(\mathbf{x})$ is generated from the Gaussian distribution $\mathcal{N}(0,1)$. Empirically, our experimental results on real-world deep learning tasks reveal $\|\nabla f(\mathbf{x})\|_1=\Theta(\sqrt{d})\,\|\nabla f(\mathbf{x})\|_2$. Both support that our convergence rate can be considered analogous to the optimal $\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\|\nabla f(\mathbf{x}^k)\|_2\right]\le O\!\big(\frac{C}{K^{1/4}}\big)$ convergence rate of SGD.
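To see why the extra $\sqrt{d}$ factor is regarded as benign, here is a brief sketch of the standard calculation behind the Gaussian claim above; it assumes, as stated, a vector $g\in\mathbb{R}^d$ with i.i.d. $\mathcal{N}(0,1)$ entries (the symbol $g$ is introduced here purely for illustration and is not the paper's notation):
\[
\mathbb{E}\,\|g\|_1=\sum_{i=1}^{d}\mathbb{E}\,|g_i|=d\sqrt{\tfrac{2}{\pi}},
\qquad
\mathbb{E}\,\|g\|_2\le\sqrt{\mathbb{E}\,\|g\|_2^{2}}=\sqrt{d},
\]
and since $\|g\|_2$ concentrates around $\sqrt{d}$, the ratio $\|g\|_1/\|g\|_2$ is $\Theta(\sqrt{d})$. Dividing the $\ell_1$ bound $O\!\big(\frac{\sqrt{d}\,C}{K^{1/4}}\big)$ by this $\sqrt{d}$ factor yields a rate of order $\frac{C}{K^{1/4}}$ in the $\ell_2$ norm, which is why the result is read as analogous to the optimal SGD rate quoted above.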
@article{li2025_2505.11840,
  title   = {On the $O(\frac{\sqrt{d}}{K^{1/4}})$ Convergence Rate of AdamW Measured by $\ell_1$ Norm},
  author  = {Huan Li and Yiming Dong and Zhouchen Lin},
  journal = {arXiv preprint arXiv:2505.11840},
  year    = {2025}
}