We introduce MaSS (Momentum-added Stochastic Solver), an accelerated SGD method for optimizing over-parametrized models. Our method is simple and efficient to implement, and does not require adapting hyper-parameters or computing full gradients in the course of optimization. Experimental evaluation of MaSS on several standard deep network architectures, including ResNet and convolutional networks, shows improved performance over SGD and Adam in both optimization and generalization. We prove accelerated convergence of MaSS over SGD and provide an analysis of hyper-parameter selection in the quadratic case, as well as some results in the general strongly convex setting. In contrast, we show theoretically and verify empirically that standard SGD+Nesterov can diverge for common choices of hyper-parameter values. We also analyze the practically important question of how the convergence rate and optimal hyper-parameters depend on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation.
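For a concrete picture of the kind of update the abstract describes, the sketch below shows a Nesterov-style stochastic step augmented with an extra compensation term and a second step size. The function name `mass_step`, the parameter names `eta1`, `eta2`, `gamma`, the toy quadratic, and the exact form of the update are illustrative assumptions, not a verbatim reproduction of the paper's algorithm.

```python
import numpy as np

def mass_step(w, u, stoch_grad, eta1, eta2, gamma):
    """One MaSS-style update (illustrative sketch, not the paper's exact rule).

    w: current iterate; u: auxiliary (lookahead) iterate.
    eta1, eta2, gamma: step sizes and momentum parameter (names assumed here).
    """
    g = stoch_grad(u)                                       # mini-batch gradient at the lookahead point
    w_next = u - eta1 * g                                    # plain descent step, as in SGD / SGD+Nesterov
    u_next = (1.0 + gamma) * w_next - gamma * w + eta2 * g   # Nesterov momentum plus a compensation term
    return w_next, u_next

# Toy usage: noisy gradients of the quadratic f(w) = 0.5 * ||w||^2.
rng = np.random.default_rng(0)
stoch_grad = lambda u: u + 0.01 * rng.standard_normal(u.shape)
w = u = np.ones(5)
for _ in range(200):
    w, u = mass_step(w, u, stoch_grad, eta1=0.5, eta2=0.05, gamma=0.7)
```

With the compensation term set to zero (eta2 = 0) the sketch reduces to a standard SGD+Nesterov update, which is the baseline the abstract compares against.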