
On the Optimality of Averaging in Distributed Statistical Learning

Abstract

A common approach to statistical learning on big data is to randomly distribute it among $m$ machines and calculate the parameter of interest by merging their $m$ individual estimates. Two key questions related to this approach are: what is the optimal aggregation procedure, and what is the accuracy loss in comparison to centralized computation? We make several contributions to these questions, under the general framework of empirical risk minimization, a.k.a. M-estimation. As data is abundant, we assume the number of samples per machine, $n$, is large and study two asymptotic settings: one where $n \to \infty$ but the number of estimated parameters $p$ is fixed, and a second high-dimensional case where both $p, n \to \infty$ with $p/n \to \kappa \in (0,1)$. Our main results include asymptotically exact expressions for the loss incurred by splitting the data, where only bounds were previously available. These are derived independently of the learning algorithm. Consequently, under suitable assumptions in the fixed-$p$ setting, averaging is {\em first-order equivalent} to a centralized solution, and thus inherits statistical properties like efficiency and robustness. In the high-dimensional setting, studied here for the first time in the context of parallelization, a qualitatively different behaviour appears. Parallelized computation generally incurs an accuracy loss, for which we derive a simple approximate formula. We conclude with several practical implications of our results.
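For concreteness, the split-and-average scheme described above can be written as follows; this is a standard formulation sketched here for illustration, with the loss $\ell$, the samples $z_i$, and the index sets $S_k$ being notation introduced for this sketch rather than taken from the paper:
\[
\hat\theta_k = \arg\min_{\theta \in \mathbb{R}^p} \frac{1}{n} \sum_{i \in S_k} \ell(\theta; z_i),
\qquad
\bar\theta = \frac{1}{m} \sum_{k=1}^{m} \hat\theta_k,
\]
where $S_1, \dots, S_m$ are the disjoint random subsets of size $n$ assigned to the $m$ machines, and the averaged estimator $\bar\theta$ is compared against the centralized M-estimator $\hat\theta = \arg\min_{\theta} \frac{1}{mn} \sum_{i=1}^{mn} \ell(\theta; z_i)$ computed on the full data.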
