Beyond SGD: Iterate Averaged Adaptive Gradient Method

Journal of Machine Learning Research (JMLR), 2020
Abstract

The objective of this work is to show that adaptive methods, when combined with decoupled weight decay and iterate averaging, can be competitive with finely tuned SGD. We posit that the effectiveness of this combination hinges on the diversity of the solutions that make up the iterate average, a property which we demonstrate explicitly on deep neural networks. We further find that partially adaptive algorithms with iterate averaging give significantly better results than SGD with iterate averaging, require less tuning, and are less prone to over-fitting the training set, and hence do not require early stopping or validation-set monitoring. We showcase the efficacy of our approach on the CIFAR-10/100, ImageNet and Penn Treebank datasets.
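To make the combination concrete, below is a minimal sketch (not the authors' code) of the recipe the abstract describes: an adaptive optimizer with decoupled weight decay (here PyTorch's AdamW) plus iterate averaging after a burn-in period (here PyTorch's AveragedModel utility). The model, toy data, burn-in threshold, and hyperparameters are placeholder assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, update_bn

# Placeholder network and data standing in for the paper's benchmarks
# (CIFAR-10/100, ImageNet, Penn Treebank).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
data = [(torch.randn(8, 32), torch.randint(0, 10, (8,))) for _ in range(10)]

# Adaptive method with decoupled weight decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
criterion = nn.CrossEntropyLoss()

# Equal-weight running average of iterates, collected after a burn-in.
avg_model = AveragedModel(model)
burn_in_epochs = 50  # assumed: start averaging once training has settled

for epoch in range(100):
    for x, y in data:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= burn_in_epochs:
        avg_model.update_parameters(model)  # fold current iterate into average

update_bn(data, avg_model)  # recompute BatchNorm statistics, if any, for the averaged weights
# Evaluate avg_model rather than model: the averaged iterate is the solution.
```

The averaging step is what exploits the solution diversity the abstract refers to: each post-burn-in iterate is a distinct solution, and the average smooths over them rather than committing to the final one.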
