A near-optimal stochastic gradient method for decentralized non-convex
finite-sum optimization
This paper describes a near-optimal stochastic first-order gradient method for decentralized finite-sum minimization of smooth non-convex functions. Specifically, we propose GT-SARAH, which employs a local SARAH-type variance reduction and global gradient tracking to address, respectively, the stochastic and decentralized nature of the problem. Considering a total of $N$ cost functions, divided equally over a directed network of $n$ nodes, we show that GT-SARAH finds an $\epsilon$-accurate first-order stationary point in $O(\sqrt{N}\epsilon^{-2})$ gradient computations across all nodes, independent of the network topology, when $n \leq O(N^{1/2}(1-\lambda)^3)$, where $(1-\lambda)$ is the spectral gap of the network weight matrix. In this regime, GT-SARAH is thus, to the best of our knowledge, the first decentralized method that achieves the algorithmic lower bound for this class of problems. Moreover, GT-SARAH achieves a non-asymptotic linear speedup, in that the total number of gradient computations at each node is reduced by a factor of $1/n$ compared to the near-optimal algorithms for this problem class that process all data at a single node. We also establish the convergence rate of GT-SARAH in other regimes, characterized by the relative sizes of the number of nodes $n$, the total number of functions $N$, and the network spectral gap $(1-\lambda)$. Over an infinite time horizon, we establish the almost sure and mean-squared convergence of GT-SARAH to a first-order stationary point.
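The update the abstract describes, a local SARAH-type variance-reduced gradient estimator combined with a gradient tracker mixed over the network, can be sketched as follows. This is a minimal illustrative sketch on a toy decentralized least-squares problem, not the paper's implementation: the problem data, step size, mixing matrix, and all function names are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical toy instance: n nodes, each holding m least-squares
# components f_ij(x) = 0.5 * (a_ij^T x - b_ij)^2. All names and
# parameters here are illustrative, not the paper's notation.
rng = np.random.default_rng(0)
n, m, d = 4, 8, 5
A = rng.normal(size=(n, m, d))
b = rng.normal(size=(n, m))

def grad(i, j, x):
    """Gradient of the j-th component at node i."""
    return A[i, j] * (A[i, j] @ x - b[i, j])

def full_local_grad(i, x):
    """Full local gradient (1/m) * sum_j grad f_ij(x)."""
    return np.mean([grad(i, j, x) for j in range(m)], axis=0)

# Doubly stochastic mixing matrix for a 4-node ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.05                        # step size (assumed, not tuned)
x = np.zeros((n, d))                # node iterates, one row per node
v = np.stack([full_local_grad(i, x[i]) for i in range(n)])  # SARAH estimators
y = v.copy()                        # gradient trackers, y_0 = v_0

def avg_grad_norm(x):
    """Stationarity measure: global gradient norm at the average iterate."""
    xbar = x.mean(axis=0)
    return np.linalg.norm(
        np.mean([full_local_grad(i, xbar) for i in range(n)], axis=0))

g0 = avg_grad_norm(x)
for t in range(500):
    x_new = W @ x - alpha * y       # consensus step plus tracked descent
    if t % m == 0:                  # outer refresh: full local gradients
        v_new = np.stack([full_local_grad(i, x_new[i]) for i in range(n)])
    else:                           # SARAH recursion on one random component
        v_new = np.empty_like(v)
        for i in range(n):
            j = rng.integers(m)
            v_new[i] = grad(i, j, x_new[i]) - grad(i, j, x[i]) + v[i]
    y = W @ y + v_new - v           # gradient tracking update
    x, v = x_new, v_new

g_final = avg_grad_norm(x)
print(g0, g_final)
```

The tracking update preserves the invariant that the average of the trackers equals the average of the local estimators, so each node descends along an approximation of the global gradient while the periodic full-gradient refresh keeps the SARAH estimators' variance under control.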