
Excess-Risk of Distributed Stochastic Learners

Abstract

This work studies the learning ability of consensus and diffusion distributed learners from continuous streams of data arising from different but related statistical distributions. Four distinctive features of diffusion learners are revealed in relation to other decentralized schemes, even under left-stochastic combination policies. First, closed-form expressions for the evolution of their excess-risk are derived for strongly-convex risk functions under a diminishing step-size rule. Using these results, it is shown that the diffusion strategy improves the asymptotic convergence rate of the excess-risk relative to non-cooperative schemes. It is also shown that when the in-network cooperation rules are designed optimally, the diffusion implementation can outperform naive centralized processing. The arguments further show that diffusion outperforms consensus strategies by reducing the overshoot during the transient phase of the learning process, as well as asymptotically. The framework adopted in this work studies convergence in the stronger mean-square-error sense, rather than in distribution, and develops tools that enable a close examination of the differences between distributed strategies in terms of asymptotic behavior, as well as in terms of convergence rates. This is achieved by exploiting properties of Gamma functions and the convergence properties of products of infinitely many scaling coefficients.
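To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of a diffusion adapt-then-combine learner with a diminishing step size, under illustrative assumptions: a common quadratic (mean-square-error) risk standing in for the "different but related" distributions, uniform left-stochastic combination weights, and a hypothetical `grad` helper returning a stochastic gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 5, 2           # number of agents, parameter dimension
w_true = np.ones(M)   # common underlying model (illustrative assumption)

# Left-stochastic combination matrix (columns sum to 1); uniform weights here
A = np.full((N, N), 1.0 / N)

def grad(w):
    # stochastic gradient of a quadratic risk from one streaming sample
    h = rng.standard_normal(M)                      # regressor
    d = h @ w_true + 0.1 * rng.standard_normal()    # noisy observation
    return -(d - h @ w) * h

w = np.zeros((N, M))          # one iterate per agent
for i in range(2000):
    mu = 1.0 / (i + 10)       # diminishing step-size rule
    # adapt: each agent takes a local stochastic-gradient step
    psi = np.array([w[k] - mu * grad(w[k]) for k in range(N)])
    # combine: w_k = sum_l a_{lk} psi_l (diffusion of intermediate iterates)
    w = A.T @ psi

print(np.mean((w - w_true) ** 2))  # network mean-square deviation
```

A consensus variant would instead combine the previous iterates before (or while) applying the gradient step; the abstract's transient-overshoot comparison concerns exactly this ordering difference.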
