A Simple Convergence Proof of Adam and Adagrad
We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound which is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations N. This bound can be made arbitrarily small: Adam with a learning rate α = 1/√N and a momentum parameter on squared gradients β₂ = 1 − 1/N achieves the same rate of convergence as Adagrad. Finally, we obtain the tightest dependency on the heavy-ball momentum parameter β₁ among all previous convergence bounds for non-convex Adam and Adagrad, improving from O((1 − β₁)⁻³) to O((1 − β₁)⁻¹). Our technique also improves the best known dependency on β₁ for standard SGD by a factor 1 − β₁.
View on arXiv
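To make the hyper-parameter choice in the abstract concrete, here is a minimal sketch of Adam run with the schedule it highlights: a learning rate of 1/√N and a squared-gradient momentum of 1 − 1/N, where N is the total number of iterations. This is an illustrative implementation of standard Adam (with the usual bias correction), not code from the paper; the quadratic test objective is an assumption chosen for the example.

```python
import numpy as np

def adam(grad, x0, n_iters, beta1=0.9, eps=1e-8):
    """Run standard Adam for n_iters steps with the hyper-parameters
    highlighted in the abstract: alpha = 1/sqrt(N), beta2 = 1 - 1/N."""
    N = n_iters
    alpha = 1.0 / np.sqrt(N)   # learning rate from the abstract
    beta2 = 1.0 - 1.0 / N      # momentum parameter on squared gradients
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)       # first moment (heavy-ball momentum)
    v = np.zeros_like(x)       # second moment (squared gradients)
    for t in range(1, N + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Illustrative smooth objective f(x) = ||x||^2 / 2, so grad f(x) = x.
x_final = adam(lambda x: x, x0=np.ones(3), n_iters=10_000)
```

With β₂ tied to N in this way, the effective averaging window for the squared gradients spans the whole trajectory, which is what lets Adam match Adagrad's rate in the analysis above.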