
Distributed Event-Based Learning via ADMM

Abstract

We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) it substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data distribution among the different agents. We therefore guarantee convergence even if the local data distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm in both convex and nonconvex settings and derive accelerated convergence rates for the convex case. We also characterize the effect of communication failures and demonstrate that our algorithm is robust to them. The article concludes by presenting numerical results from distributed learning tasks on the MNIST and CIFAR-10 datasets. The experiments underline communication savings of 35% or more due to the event-based communication strategy, show resilience towards heterogeneous data distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, SCAFFOLD, and FedADMM.
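To make the event-based communication idea concrete, here is a minimal sketch of consensus ADMM with an event trigger on a distributed least-squares problem. This is an illustration under assumed details (the trigger rule, the threshold `delta`, and the quadratic local objectives are my choices, not the paper's exact algorithm): each agent performs its local ADMM update but only transmits its message to the server when it has moved by more than `delta` since the last transmission.

```python
import numpy as np

# Illustrative event-triggered consensus ADMM (names, trigger rule, and
# threshold are assumptions, not the paper's exact scheme).
# Agent i holds f_i(x) = 0.5 * ||A_i x - b_i||^2 on its own local data.

rng = np.random.default_rng(0)
n_agents, d, m = 5, 3, 20
A = [rng.normal(size=(m, d)) for _ in range(n_agents)]
x_true = rng.normal(size=d)
b = [Ai @ x_true + 0.01 * rng.normal(size=m) for Ai in A]  # heterogeneous local data

rho, delta, T = 1.0, 1e-3, 200          # ADMM penalty, trigger threshold, iterations
x = [np.zeros(d) for _ in range(n_agents)]
u = [np.zeros(d) for _ in range(n_agents)]
last_sent = [np.zeros(d) for _ in range(n_agents)]
z = np.zeros(d)
comms = 0                                # number of actual transmissions

for _ in range(T):
    for i in range(n_agents):
        # Closed-form local x-update: argmin f_i(x) + (rho/2)||x - z + u_i||^2
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
        msg = x[i] + u[i]
        # Event trigger: transmit only if the message changed by more than delta
        if np.linalg.norm(msg - last_sent[i]) > delta:
            last_sent[i] = msg.copy()
            comms += 1
    z = np.mean(last_sent, axis=0)       # server averages last received messages
    for i in range(n_agents):
        u[i] += x[i] - z                 # dual (scaled multiplier) update

# Compare against the centralized least-squares solution
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("transmissions:", comms, "of", n_agents * T)
print("consensus error:", np.linalg.norm(z - x_star))
```

Because updates shrink as the iterates converge, the trigger fires less and less often, which is the mechanism behind the communication savings the abstract reports; the consensus estimate still lands near the centralized solution despite each agent seeing different data.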

@article{er2025_2405.10618,
  title={Distributed Event-Based Learning via ADMM},
  author={Guner Dilsad Er and Sebastian Trimpe and Michael Muehlebach},
  journal={arXiv preprint arXiv:2405.10618},
  year={2025}
}