arXiv:2008.00742
Collaborative Learning as an Agreement Problem

3 August 2020
El-Mahdi El-Mhamdi
Sadegh Farhadkhani
R. Guerraoui
Arsany Guirguis
L. Hoang
Sébastien Rouault
Abstract

We address the problem of Byzantine collaborative learning: a set of n nodes seek to collectively learn from each other's local data. The data distribution may vary from one node to another. No node is trusted, and f < n nodes can behave arbitrarily, i.e., they can be Byzantine. We prove that collaborative learning is equivalent to a new and weak form of agreement, which we call averaging agreement. In this problem, nodes each start with an initial vector and seek to approximately agree on a common vector that is close to the average of the honest nodes' initial vectors. More precisely, the "error" must remain within a multiplicative constant (which we call the averaging constant) of the maximum ℓ₂ distance between the honest nodes' initial vectors. Essentially, the smaller the averaging constant, the better the learning. We present two asynchronous solutions to averaging agreement, each of which we prove optimal along some dimension. The first, based on minimum-diameter averaging, requires n ≥ 6f+1, but achieves the asymptotically best possible averaging constant up to a multiplicative factor. The second, based on reliable broadcast and coordinate-wise trimmed mean, achieves optimal Byzantine resilience, i.e., n ≥ 3f+1. Each of these algorithms induces an optimal collaborative learning protocol.
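The two aggregation rules named in the abstract can be sketched as follows. This is a minimal illustration assuming NumPy; the function names are hypothetical, and the real protocols additionally involve asynchronous communication and reliable broadcast, which are omitted here.

```python
import numpy as np
from itertools import combinations

def coordinate_wise_trimmed_mean(vectors, f):
    """Drop the f smallest and f largest values in every coordinate,
    then average the remaining n - 2f values per coordinate."""
    x = np.asarray(vectors, dtype=float)      # shape (n, d)
    n = x.shape[0]
    assert n > 2 * f, "need n > 2f inputs to trim f from each side"
    s = np.sort(x, axis=0)                    # sort each coordinate independently
    return s[f:n - f].mean(axis=0)

def minimum_diameter_average(vectors, f):
    """Average the subset of n - f vectors whose pairwise l2 diameter
    is smallest (brute-force search, for illustration only)."""
    x = np.asarray(vectors, dtype=float)
    n = x.shape[0]
    assert n - f >= 2, "need at least two vectors in the selected subset"
    best_subset, best_diam = None, np.inf
    for idx in combinations(range(n), n - f):
        sub = x[list(idx)]
        diam = max(np.linalg.norm(a - b) for a, b in combinations(sub, 2))
        if diam < best_diam:
            best_subset, best_diam = sub, diam
    return best_subset.mean(axis=0)
```

For instance, with five scalar inputs [0, 1, 2, 3, 100] and f = 1, the trimmed mean discards 0 and 100 in the single coordinate and returns 2.0. Note that the subset search in `minimum_diameter_average` is exponential in n; it only illustrates the selection rule, not an efficient implementation.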

View on arXiv