arXiv:2006.02582
Local SGD With a Communication Overhead Depending Only on the Number of Workers

Artin Spiridonoff, Alexander Olshevsky, I. Paschalidis
3 June 2020 · FedML

Papers citing "Local SGD With a Communication Overhead Depending Only on the Number of Workers" (8 papers)
EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models
Jialiang Cheng, Ning Gao, Yun Yue, Zhiling Ye, Jiadi Jiang, Jian Sha · OffRL · 10 Dec 2024
Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup
Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao · FedML · 30 Jul 2023
Local SGD Accelerates Convergence by Exploiting Second Order Information of the Loss Function
Linxuan Pan, Shenghui Song · FedML · 24 May 2023
Federated Temporal Difference Learning with Linear Function Approximation under Environmental Heterogeneity
Han Wang, A. Mitra, Hamed Hassani, George J. Pappas, James Anderson · FedML · 04 Feb 2023
On the Stability Analysis of Open Federated Learning Systems
Youbang Sun, H. Fernando, Tianyi Chen, Shahin Shahrampour · FedML · 25 Sep 2022
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani · FedML · 14 Feb 2021
Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen · FedML · 27 Oct 2020
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao · 07 Dec 2010