How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD

Abstract

Stochastic gradient descent (SGD) gives an optimal convergence rate when minimizing convex stochastic objectives $f(x)$. However, in terms of making the gradients small, the original SGD does not give an optimal rate, even when $f(x)$ is convex. If $f(x)$ is convex, to find a point with gradient norm $\varepsilon$, we design an algorithm SGD3 with a near-optimal rate $\tilde{O}(\varepsilon^{-2})$, improving on the best known rate $O(\varepsilon^{-8/3})$ of [18]. If $f(x)$ is nonconvex, to find an $\varepsilon$-approximate local minimum, we design an algorithm SGD5 with rate $\tilde{O}(\varepsilon^{-3.5})$, whereas previous SGD variants only achieve $\tilde{O}(\varepsilon^{-4})$ [6, 15, 33]. This is no slower than the best known stochastic version of Newton's method in all parameter regimes [30].
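To make the distinction concrete, the sketch below runs plain SGD on a convex least-squares objective and stops once the full gradient norm falls below $\varepsilon$, which is the "small gradient" criterion the abstract discusses. This is an illustrative, assumed setup (objective, step-size schedule, and tolerance are all hypothetical), not the paper's SGD3 or SGD5.

```python
import numpy as np

# Minimal sketch (assumed setup; NOT the paper's SGD3/SGD5): plain SGD on a
# convex least-squares objective, stopping once the *full* gradient norm
# drops below epsilon -- the "making the gradients small" criterion.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def full_grad(x):
    # gradient of f(x) = (1/2n) * ||Ax - b||^2
    return A.T @ (A @ x - b) / n

def stoch_grad(x):
    # unbiased stochastic gradient from one random sample
    i = rng.integers(n)
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
eps, eta0 = 1e-2, 0.05
for t in range(100_000):
    if np.linalg.norm(full_grad(x)) <= eps:      # small-gradient stopping rule
        break
    x -= eta0 / np.sqrt(t + 1) * stoch_grad(x)   # decaying step size

print(f"stopped at step {t}, ||grad f(x)|| = {np.linalg.norm(full_grad(x)):.4f}")
```

Note that the classical SGD guarantee bounds the objective gap $f(x) - f(x^*)$; driving the gradient norm itself below $\varepsilon$ at an optimal rate is the harder problem the abstract addresses.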
