One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them

12 May 2019
Saber Salehkaleybar
Arsalan Sharifnassab
S. J. Golestani
    FedML
Abstract

We consider distributed statistical optimization in the one-shot setting, where there are $m$ machines, each observing $n$ i.i.d. samples. Based on its observed samples, each machine sends a $B$-bit-long message to a server. The server then collects the messages from all machines and estimates a parameter that minimizes an expected convex loss function. We investigate the impact of the communication constraint, $B$, on the expected error and derive a tight lower bound on the error achievable by any algorithm. We then propose an estimator, which we call the Multi-Resolution Estimator (MRE), whose expected error (when $B \ge \log mn$) meets the aforementioned lower bound up to poly-logarithmic factors, and which is thereby order optimal. We also address the problem of learning under a tiny communication budget, and present lower and upper error bounds when $B$ is a constant. The expected error of MRE, unlike that of existing algorithms, tends to zero as the number of machines ($m$) goes to infinity, even when the number of samples per machine ($n$) remains upper bounded by a constant. This property of the MRE algorithm makes it applicable in new machine learning paradigms where $m$ is much larger than $n$.
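
The following is a minimal sketch of the one-shot, communication-constrained setting the abstract describes: each of $m$ machines computes a local estimate from its $n$ samples, encodes it as a $B$-bit message, and the server aggregates the messages in a single round. This is the naive quantize-and-average baseline shown for illustration only, not the paper's MRE estimator; the Gaussian data, squared loss, quantization range, and averaging rule are all assumptions made for the example.

```python
import numpy as np

def quantize(x, num_bits, lo=-1.0, hi=1.0):
    """Uniformly quantize a scalar in [lo, hi] to one of 2**num_bits levels
    (i.e., a B-bit message), and return the reconstructed value."""
    levels = 2 ** num_bits
    x = np.clip(x, lo, hi)
    idx = np.round((x - lo) / (hi - lo) * (levels - 1))
    return lo + idx * (hi - lo) / (levels - 1)

def one_shot_average(m=1000, n=5, num_bits=8, theta_true=0.3, seed=0):
    """Each of m machines observes n i.i.d. samples, sends a num_bits-bit
    message (its quantized local mean, the local minimizer of squared loss),
    and the server averages the received messages in one shot."""
    rng = np.random.default_rng(seed)
    messages = []
    for _ in range(m):
        samples = theta_true + rng.normal(0.0, 1.0, size=n)  # local data (assumed Gaussian)
        local_estimate = samples.mean()                       # local empirical minimizer
        messages.append(quantize(local_estimate, num_bits))   # B-bit message to the server
    return np.mean(messages)                                  # server-side aggregation

if __name__ == "__main__":
    print(one_shot_average())
```

Note that for this simple baseline the error floor is governed by $n$, whereas the abstract's point is that MRE's error vanishes as $m \to \infty$ even with $n$ held constant.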
