Understanding Why Adam Outperforms SGD: Gradient Heterogeneity in Transformers

Abstract

Transformers are challenging to optimize with SGD and typically require adaptive optimizers such as Adam. However, the reasons behind the superior performance of Adam over SGD remain unclear. In this study, we investigate the optimization of transformers by focusing on gradient heterogeneity, defined as the disparity in gradient norms among parameters. Our analysis shows that gradient heterogeneity hinders gradient-based optimization, including SGD, while sign-based optimization, a simplified variant of Adam, is less affected. We further examine gradient heterogeneity in transformers and show that it is influenced by the placement of layer normalization. Experimental results from fine-tuning transformers in both NLP and vision domains validate our theoretical analyses. This study provides insights into the optimization challenges of transformers and offers guidance for designing future optimization algorithms. Code is available at this https URL.
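
To make the abstract's central quantity concrete, here is a minimal sketch, assuming PyTorch, of how one could measure the disparity in gradient norms among parameters and how a sign-based update discards the magnitude disparity that a plain SGD step inherits. The names model, loss, gradient_norms_by_parameter, heterogeneity_ratio, and signsgd_step are illustrative placeholders, not the paper's implementation.

import torch

def gradient_norms_by_parameter(model, loss):
    # Backpropagate once, then record the L2 gradient norm of each
    # named parameter tensor. `model` and `loss` stand in for an
    # arbitrary transformer and its training loss.
    loss.backward()
    return {
        name: p.grad.norm().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

def heterogeneity_ratio(norms):
    # Ratio of the largest to the smallest per-parameter gradient norm.
    # A large ratio is one simple proxy for the gradient heterogeneity
    # the paper argues hinders SGD but affects sign-based updates less.
    values = list(norms.values())
    return max(values) / (min(values) + 1e-12)

def signsgd_step(model, lr=1e-3):
    # Sign-based update, a simplified stand-in for Adam: every
    # coordinate moves by the same magnitude regardless of how large
    # its gradient is, unlike an SGD step p -= lr * p.grad.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(p.grad.sign(), alpha=-lr)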

@article{tomihari2025_2502.00213,
  title={Understanding Why Adam Outperforms SGD: Gradient Heterogeneity in Transformers},
  author={Akiyoshi Tomihari and Issei Sato},
  journal={arXiv preprint arXiv:2502.00213},
  year={2025}
}