Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression
Sijin Chen, Zhize Li, Yuejie Chi
arXiv:2310.19059 · 29 October 2023 · FedML

Papers citing "Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression"

4 / 4 papers shown

Second-Order Convergence in Private Stochastic Non-Convex Optimization
Youming Tao, Zuyuan Zhang, Dongxiao Yu, Xiuzhen Cheng, Falko Dressler, Di Wang
21 May 2025

Convergence Analysis of Asynchronous Federated Learning with Gradient Compression for Non-Convex Optimization
Diying Yang, Yingwei Hou, Danyang Xiao, Weigang Wu
28 Apr 2025 · FedML

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, Artavazd Maranjyan, Peter Richtárik
07 Mar 2024

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021