arXiv:2012.04061

Faster Non-Convex Federated Learning via Global and Local Momentum

7 December 2020
Rudrajit Das
Anish Acharya
Abolfazl Hashemi
Sujay Sanghavi
Inderjit S. Dhillon
Ufuk Topcu
    FedML
Abstract

We propose \texttt{FedGLOMO}, a novel federated learning (FL) algorithm with an iteration complexity of $\mathcal{O}(\epsilon^{-1.5})$ to converge to an $\epsilon$-stationary point (i.e., $\mathbb{E}[\|\nabla f(\bm{x})\|^2] \leq \epsilon$) for smooth non-convex functions -- under arbitrary client heterogeneity and compressed communication -- compared to the $\mathcal{O}(\epsilon^{-2})$ complexity of most prior works. Our key algorithmic idea that enables achieving this improved complexity is based on the observation that the convergence in FL is hampered by two sources of high variance: (i) the global server aggregation step with multiple local updates, exacerbated by client heterogeneity, and (ii) the noise of the local client-level stochastic gradients. By modeling the server aggregation step as a generalized gradient-type update, we propose a variance-reducing momentum-based global update at the server, which when applied in conjunction with variance-reduced local updates at the clients, enables \texttt{FedGLOMO} to enjoy an improved convergence rate. Moreover, we derive our results under a novel and more realistic client-heterogeneity assumption which we verify empirically -- unlike prior assumptions that are hard to verify. Our experiments illustrate the intrinsic variance reduction effect of \texttt{FedGLOMO}, which implicitly suppresses client-drift in heterogeneous data distribution settings and promotes communication efficiency.
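The abstract's central idea -- treating the server aggregation as a gradient-type update and applying momentum-based variance reduction to it -- can be illustrated with a minimal sketch. This is not the paper's exact \texttt{FedGLOMO} algorithm: the client objectives, step sizes, and the STORM-style momentum form below are illustrative assumptions, and compressed communication and stochastic local gradients are omitted for clarity.

```python
import numpy as np

# Hypothetical heterogeneous clients: f_i(x) = 0.5 * ||x - b_i||^2,
# with distinct minimizers b_i modeling client heterogeneity.
rng = np.random.default_rng(0)
num_clients, dim, local_steps, eta, beta = 8, 5, 4, 0.1, 0.5
b = rng.normal(size=(num_clients, dim))

def pseudo_grad(x, bi):
    """Run a few local gradient steps and return the client's
    pseudo-gradient (x - x_local) / eta, i.e. its aggregate move."""
    x_loc = x.copy()
    for _ in range(local_steps):
        x_loc -= eta * (x_loc - bi)   # gradient of 0.5 * ||x - b_i||^2
    return (x - x_loc) / eta

def avg_pseudo_grad(x):
    return np.mean([pseudo_grad(x, bi) for bi in b], axis=0)

x = np.zeros(dim)
d = avg_pseudo_grad(x)                # initialize the global momentum
for _ in range(200):
    x_new = x - eta * d
    # Momentum-based variance reduction on the server update
    # (STORM-style recursion, an illustrative stand-in):
    # d_{t+1} = avg Δ(x_{t+1}) + (1 - beta) * (d_t - avg Δ(x_t))
    d = avg_pseudo_grad(x_new) + (1 - beta) * (d - avg_pseudo_grad(x))
    x = x_new

# In this noiseless quadratic setting the iterates contract toward the
# average minimizer, the unique stationary point of the global objective.
global_min = b.mean(axis=0)
```

In the noiseless case the correction term `d - avg Δ(x_t)` vanishes and the recursion reduces to plain aggregation; its value shows up once local gradients are stochastic, where it cancels part of the update noise across rounds.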
