We consider decentralized stochastic optimization problems, where $n$ networked nodes, each owning a local cost function, cooperate to find a minimizer of the globally-averaged cost. A widely studied decentralized algorithm for this problem is decentralized SGD (D-SGD), in which each node averages only with its neighbors. D-SGD is efficient in per-iteration communication, but it is very sensitive to the network topology. For smooth objective functions, the transient stage (which measures the number of iterations the algorithm must run before reaching the linear-speedup stage) of D-SGD is on the order of $O(n/(1-\lambda)^2)$ and $O(n^3/(1-\lambda)^4)$ for strongly and generally convex cost functions, respectively, where $1-\lambda \in (0,1)$ is a topology-dependent quantity that approaches $0$ for a large and sparse network. Hence, D-SGD suffers from slow convergence on large and sparse networks. In this work, we study the non-asymptotic convergence properties of the D$^2$/Exact-Diffusion algorithm. By eliminating the influence of data heterogeneity between nodes, D$^2$/Exact-Diffusion is shown to have an enhanced transient stage on the order of $O(n/(1-\lambda))$ and $O(n^3/(1-\lambda)^2)$ for strongly and generally convex cost functions, respectively. Moreover, when D$^2$/Exact-Diffusion is implemented with gradient accumulation and multi-round gossip communications, its transient stage can be further improved, with an even weaker dependence on the network topology, for both strongly and generally convex cost functions. To our knowledge, these established results give D$^2$/Exact-Diffusion the best (i.e., weakest) dependence on network topology among existing decentralized algorithms. We also conduct numerical simulations to validate our theory.
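For concreteness, the problem being solved and the D-SGD recursion can be written as below. This is a sketch in the standard notation of the decentralized-optimization literature; the mixing weights $w_{ij}$, step size $\gamma$, and neighborhood $\mathcal{N}_i$ are assumed notation rather than quotations from the paper, and the adapt-then-combine form of D-SGD is used.

```latex
% Globally-averaged cost over n nodes; f_i is the local expected loss at node i.
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\qquad f_i(x) = \mathbb{E}_{\xi_i \sim \mathcal{D}_i} \big[ F(x; \xi_i) \big].

% D-SGD (adapt-then-combine form): node i takes a local stochastic-gradient
% step and then averages with its neighbors \mathcal{N}_i through the
% doubly-stochastic mixing weights w_{ij}; the \lambda in the transient-stage
% bounds is the second-largest eigenvalue modulus of W = [w_{ij}].
x_i^{(k+1)} = \sum_{j \in \mathcal{N}_i} w_{ij}
\Big( x_j^{(k)} - \gamma \, \nabla F\big( x_j^{(k)}; \xi_j^{(k)} \big) \Big).
```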
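The following minimal numpy simulation illustrates the D-SGD recursion on a ring topology. The quadratic local costs, ring mixing matrix, noise model, and all constants are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
# Minimal D-SGD simulation on a ring graph (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d, T, gamma, sigma = 16, 5, 2000, 0.02, 0.1

# Heterogeneous local costs f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.standard_normal((n, d, d)) / np.sqrt(d)
b = rng.standard_normal((n, d))

# Doubly-stochastic mixing matrix W of a ring: each node averages itself
# and its two neighbors; the second-largest eigenvalue modulus lambda of W
# approaches 1 as n grows, so the quantity 1 - lambda shrinks.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

def stoch_grad(x):
    """Noisy gradients A_i^T (A_i x_i - b_i) + noise, one row per node."""
    resid = np.einsum('nij,nj->ni', A, x) - b
    grad = np.einsum('nji,nj->ni', A, resid)
    return grad + sigma * rng.standard_normal((n, d))

# D-SGD: local stochastic-gradient step, then one round of gossip averaging.
x = np.zeros((n, d))
for _ in range(T):
    x = W @ (x - gamma * stoch_grad(x))

# Centralized minimizer of the averaged cost, for reference.
x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print("D-SGD: ||x_bar - x*|| =", np.linalg.norm(x.mean(axis=0) - x_star))
```

On a ring, $1-\lambda$ decays on the order of $1/n^2$, which is precisely the large-and-sparse regime where the transient-stage bounds above degrade.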
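Next, a sketch of the D$^2$/Exact-Diffusion update, continuing the setup of the previous snippet (it reuses `n`, `d`, `T`, `gamma`, `W`, `stoch_grad`, and `x_star`). It follows the commonly stated adapt-correct-combine form of Exact-Diffusion with mixing matrix $\bar{W} = (I + W)/2$; treat it as an illustration of the heterogeneity-correction idea rather than the paper's exact pseudocode.

```python
# Exact-Diffusion: the "correct" step cancels the steady-state bias that
# data heterogeneity induces in D-SGD, which is what improves the
# transient stage's dependence on 1 - lambda.
W_bar = 0.5 * (np.eye(n) + W)         # positive semidefinite mixing matrix

x = np.zeros((n, d))
psi_prev = x.copy()
for _ in range(T):
    psi = x - gamma * stoch_grad(x)   # adapt:   local stochastic step
    phi = psi + x - psi_prev          # correct: remove heterogeneity bias
    x = W_bar @ phi                   # combine: gossip with neighbors
    psi_prev = psi

print("Exact-Diffusion: ||x_bar - x*|| =",
      np.linalg.norm(x.mean(axis=0) - x_star))
```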
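Finally, a schematic of the gradient-accumulation plus multi-round-gossip variant mentioned above, again continuing the same setup. Here `R_acc` gradient samples are averaged per update, and plain repeated gossip `W**R_mix` stands in for the multi-round communication; both knobs and the schedule are assumptions of this sketch, and the paper's scheme (e.g., accelerated gossip) may differ.

```python
# Gradient accumulation + multi-round gossip, sketched on top of D-SGD.
# R_acc stochastic gradients are averaged per update (variance / R_acc),
# and the mixing matrix is applied R_mix times per iteration.
R_acc, R_mix = 8, 4                       # illustrative accumulation / gossip rounds
W_eff = np.linalg.matrix_power(W, R_mix)  # plain repeated gossip

x = np.zeros((n, d))
for _ in range(T // R_acc):               # same gradient budget as before
    g = np.mean([stoch_grad(x) for _ in range(R_acc)], axis=0)
    x = W_eff @ (x - gamma * g)

print("Accumulation + multi-round gossip: ||x_bar - x*|| =",
      np.linalg.norm(x.mean(axis=0) - x_star))
```

Repeated gossip boosts the effective spectral gap from $1-\lambda$ to $1-\lambda^{R_{\mathrm{mix}}}$, which is the mechanism by which the topology dependence of the transient stage weakens.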