On the Convergence of Federated Averaging under Partial Participation for Over-parameterized Neural Networks

Abstract

Federated learning (FL) is a widely employed distributed paradigm for collaboratively training machine learning models across multiple clients without sharing local data. In practice, FL must cope with partial client participation caused by limited bandwidth, intermittent connections, and strict synchronization delays. At the same time, few theoretical convergence guarantees exist in this practical setting, especially for the non-convex optimization of neural networks. To bridge this gap, we study the convergence of the federated averaging (FedAvg) method for two canonical models: a deep linear network and a two-layer ReLU network. Under an over-parameterization assumption, we provably show that FedAvg converges to a global minimum at a linear rate $\mathcal{O}\left(\left(1-\frac{\min_{i \in [t]}|S_i|}{N^2}\right)^t\right)$ after $t$ iterations, where $N$ is the number of clients and $|S_i|$ is the number of participating clients in the $i$-th iteration. Experimental evaluations confirm our theoretical results.
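To make the setting concrete, the following is a minimal sketch of FedAvg with partial client participation on a two-layer ReLU network, in the spirit of the setup analyzed above. It is not the paper's experimental code: the synthetic data, the hyperparameters (num_clients, local_steps, lr, rounds), and the choice to train only the hidden-layer weights (with the output layer fixed, as in common NTK-style analyses) are all illustrative assumptions.

```python
# Minimal sketch of FedAvg with partial participation (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
num_clients, d, m, n_per_client = 10, 5, 64, 20   # N clients, input dim, width, local samples
local_steps, lr, rounds = 5, 0.05, 200            # assumed hyperparameters

# Synthetic local datasets (X_k, y_k) for each client k.
data = []
for _ in range(num_clients):
    X = rng.standard_normal((n_per_client, d))
    y = rng.standard_normal(n_per_client)
    data.append((X, y))

# Two-layer ReLU network f(x) = a^T relu(W x); only W is trained,
# the output weights a are fixed at random signs.
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def loss_and_grad(W, X, y):
    Z = X @ W.T                        # (n, m) pre-activations
    H = np.maximum(Z, 0.0)             # ReLU features
    resid = H @ a - y                  # prediction residuals
    loss = 0.5 * np.mean(resid ** 2)
    # Gradient of the squared loss w.r.t. W.
    G = ((resid[:, None] * (Z > 0)) * a[None, :]).T @ X / len(y)
    return loss, G

for t in range(rounds):
    # Partial participation: only a random subset S_t of clients joins round t.
    S_t = rng.choice(num_clients, size=max(1, num_clients // 2), replace=False)
    updates = []
    for k in S_t:
        W_k = W.copy()
        for _ in range(local_steps):   # local (full-batch) gradient steps
            _, G = loss_and_grad(W_k, *data[k])
            W_k -= lr * G
        updates.append(W_k)
    W = np.mean(updates, axis=0)       # server averages the participants' models

    if t % 50 == 0:
        avg_loss = np.mean([loss_and_grad(W, X, y)[0] for X, y in data])
        print(f"round {t}: average local loss {avg_loss:.4f}")
```

Under over-parameterization (large width m), the theory predicts the averaged loss in this kind of simulation decays linearly, with the per-round contraction governed by the size of the smallest participating subset $|S_i|$, consistent with the stated rate.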
