First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time

Abstract

Two classes of methods have been proposed for escaping from saddle points: one uses the second-order information carried by the Hessian, and the other adds noise to the first-order information. The existing analyses of algorithms that add noise to the first-order information are quite involved and hide the essence of the added noise, which hinders further improvement of these algorithms. In this paper, we present a novel perspective on the noise-adding technique, namely that adding noise to the first-order information helps extract the negative curvature of the Hessian matrix, and we provide formal reasoning for this perspective by analyzing a simple first-order procedure. More importantly, the proposed procedure enables one to design purely first-order stochastic algorithms for escaping from non-degenerate saddle points with a much better time complexity (almost linear in the problem's dimensionality). In particular, we develop a {\bf first-order stochastic algorithm} based on our new technique and an existing algorithm that only converges to a first-order stationary point; the resulting algorithm enjoys a time complexity of $\widetilde O(d/\epsilon^{3.5})$ for finding a nearly second-order stationary point $\mathbf{x}$ such that $\|\nabla F(\mathbf{x})\| \leq \epsilon$ and $\nabla^2 F(\mathbf{x}) \geq -\sqrt{\epsilon}\, I$ (with high probability), where $F(\cdot)$ denotes the objective function and $d$ is the dimensionality of the problem. To the best of our knowledge, this is the best theoretical result for first-order algorithms in stochastic non-convex optimization, competitive with, if not better than, existing stochastic algorithms that hinge on second-order information.
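To make the noise-as-negative-curvature perspective concrete, here is a minimal, self-contained sketch. It is not the paper's exact procedure: the function name, step size, perturbation radius, and thresholds are illustrative, and a deterministic gradient oracle stands in for the paper's stochastic setting. The point it illustrates is that a random perturbation, updated using only gradient differences, implicitly runs the power method on $I - \eta \nabla^2 F(\mathbf{x})$ and therefore amplifies components of the noise that lie along negative-curvature directions:

```python
import numpy as np

def extract_negative_curvature(grad, x, eta=0.1, r=1e-3, T=200, thresh=1.0):
    """Illustrative sketch: amplify a random perturbation via gradient differences.

    The update u <- u - eta * (grad(x + u) - grad(x)) costs two gradient
    evaluations and, for small u, approximates the power iteration
    u <- (I - eta * H) u with H the Hessian at x.  Components of the noise
    along eigenvectors of H with sufficiently negative eigenvalues grow
    geometrically, so a large ||u|| certifies an approximate
    negative-curvature direction.  All parameter values here are
    hypothetical, not the paper's.
    """
    rng = np.random.default_rng(0)
    u = r * rng.standard_normal(x.shape)   # injected noise
    g_x = grad(x)
    for _ in range(T):
        u = u - eta * (grad(x + u) - g_x)  # ~ (I - eta * H) @ u
        nrm = np.linalg.norm(u)
        if nrm > thresh:                   # noise was amplified:
            return u / nrm                 # negative-curvature direction found
    return None                            # likely no strong negative curvature

# Demo on a quadratic saddle f(x) = 0.5 * x^T diag(1, -1) x at x = 0:
# the gradient vanishes there, yet the procedure recovers the direction
# of the eigenvalue -1 (the second coordinate axis).
H = np.diag([1.0, -1.0])
direction = extract_negative_curvature(lambda z: H @ z, np.zeros(2))
print(direction)  # approximately [0, +/-1]
```

In the demo, the origin is a first-order stationary point, so a plain gradient method would stall; the returned direction aligns with the eigenvector of the negative eigenvalue, which is exactly the information a first-order method needs to escape the saddle.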
