
Parallel Restarted SPIDER -- Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity

Abstract

In this paper, we propose a distributed algorithm for stochastic smooth, non-convex optimization. We assume a worker-server architecture where $N$ nodes, each having $n$ (potentially infinite) samples, collaborate with the help of a central server to perform the optimization task. The global objective is to minimize the average of the local cost functions available at individual nodes. The proposed approach is a non-trivial extension of the popular parallel-restarted SGD algorithm, incorporating the optimal variance-reduction-based SPIDER gradient estimator into it. We prove convergence of our algorithm to a first-order stationary solution. The proposed approach achieves the best known communication complexity $O(\epsilon^{-1})$ along with the optimal computation complexity. For finite-sum problems (finite $n$), we achieve the optimal computation (IFO) complexity $O(\sqrt{Nn}\,\epsilon^{-1})$. For online problems ($n$ unknown or infinite), we achieve the optimal IFO complexity $O(\epsilon^{-3/2})$. In both cases, we maintain the linear speedup achieved by existing methods. This is a massive improvement over the $O(\epsilon^{-2})$ IFO complexity of the existing approaches. Additionally, our algorithm is general enough to allow non-identical distributions of data across workers, as in the recently proposed federated learning paradigm.
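
To make the ingredients named in the abstract concrete, the following minimal Python/NumPy sketch combines a SPIDER-style recursive gradient estimator at each simulated worker with periodic averaging at a server, on a toy least-squares problem. The restart period, step size, batch size, and all variable names are hypothetical choices for illustration only; this is not the authors' pseudocode or the parameter setting analyzed in the paper.

    # Illustrative sketch (assumptions noted above): parallel-restarted local updates
    # with a SPIDER-style recursive gradient estimator, on a toy least-squares problem
    # split across N simulated workers.
    import numpy as np

    rng = np.random.default_rng(0)
    N, n, d = 4, 200, 10                        # workers, samples per worker, dimension
    A = [rng.normal(size=(n, d)) for _ in range(N)]
    b = [a @ rng.normal(size=d) + 0.1 * rng.normal(size=n) for a in A]

    def stoch_grad(i, x, idx):
        """Mini-batch gradient of worker i's local least-squares loss."""
        Ai, bi = A[i][idx], b[i][idx]
        return Ai.T @ (Ai @ x - bi) / len(idx)

    def full_grad(i, x):
        """Full local gradient, used as the anchor at each restart."""
        return A[i].T @ (A[i] @ x - b[i]) / n

    x = np.zeros(d)                             # server model
    eta, restart_period, batch_size, rounds = 0.01, 20, 16, 50   # hypothetical values

    for r in range(rounds):                     # one communication round per restart
        local_x = []
        for i in range(N):
            xi = x.copy()
            vi = full_grad(i, xi)               # SPIDER anchor gradient at the restart
            for t in range(restart_period):
                x_prev = xi.copy()
                xi = xi - eta * vi              # local step with the current estimate
                idx = rng.choice(n, size=batch_size, replace=False)
                # SPIDER recursion: v_t = g(x_t) - g(x_{t-1}) + v_{t-1}
                vi = stoch_grad(i, xi, idx) - stoch_grad(i, x_prev, idx) + vi
            local_x.append(xi)
        # Server averages the local iterates (the only communication in the round).
        x = np.mean(local_x, axis=0)

    avg_grad = np.mean([full_grad(i, x) for i in range(N)], axis=0)
    print("final gradient norm:", np.linalg.norm(avg_grad))

In this sketch, communication happens only once per restart period, which is the mechanism behind the $O(\epsilon^{-1})$ communication complexity discussed in the abstract, while the recursive estimator reduces the variance of the local gradient steps between averaging rounds.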
