On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization

Hao Yu, Rong Jin · 10 May 2019

Papers citing "On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization"

30 of 30 papers shown

HASFL: Heterogeneity-aware Split Federated Learning over Edge Computing Systems
Zheng Lin, Zhe Chen, Xianhao Chen, Wei Ni, Yue Gao · FedML · 10 Jun 2025

Communication-Efficient Distributed Deep Learning via Federated Dynamic Averaging
Michail Theologitis, Georgios Frangias, Georgios Anestis, V. Samoladas, Antonios Deligiannakis · FedML · 31 May 2024

Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu · GNN · 09 Apr 2024

AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks
Zhengyi Lin, Guanqiao Qu, Wei Wei, Xianhao Chen, Kin K. Leung · 19 Mar 2024

FedBIAD: Communication-Efficient and Accuracy-Guaranteed Federated Learning with Bayesian Inference-Based Adaptive Dropout
Jingjing Xue, Min Liu, Sheng Sun, Yuwei Wang, Hui Jiang, Xue Jiang · 14 Jul 2023

Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching
S. Tyagi, Prateek Sharma · 20 May 2023

Scavenger: A Cloud Service for Optimizing Cost and Performance of ML Training
S. Tyagi, Prateek Sharma · 12 Mar 2023

Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games
Sihan Zeng, Thinh T. Doan, Justin Romberg · 27 May 2022

GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Model
Wenbo Su, Yuanxing Zhang, Yufeng Cai, Kaixu Ren, Pengjie Wang, ..., Jing Chen, Hongbo Deng, Jian Xu, Lin Qu, Bo Zheng · 23 May 2022

Sharp Bounds for Federated Averaging (Local SGD) and Continuous Perspective
Margalit Glasgow, Honglin Yuan, Tengyu Ma · FedML · 05 Nov 2021

What Do We Mean by Generalization in Federated Learning?
Honglin Yuan, Warren Morningstar, Lin Ning, K. Singhal · OOD, FedML · 27 Oct 2021

Federated Submodel Optimization for Hot and Cold Data Features
Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lv, Yanghe Feng, Guihai Chen · FedML · 16 Sep 2021

STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning
Prashant Khanduri, Pranay Sharma, Haibo Yang, Min-Fong Hong, Jia Liu, K. Rajawat, P. Varshney · FedML · 19 Jun 2021

Oscars: Adaptive Semi-Synchronous Parallel Model for Distributed Deep Learning with Global View
Sheng-Jun Huang · 17 Feb 2021

To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices
Liang Li, Dian Shi, Ronghui Hou, Hui Li, Miao Pan, Zhu Han · FedML · 22 Dec 2020

Federated Composite Optimization
Honglin Yuan, Manzil Zaheer, Sashank J. Reddi · FedML · 17 Nov 2020

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen · FedML · 27 Oct 2020

Federated Accelerated Stochastic Gradient Descent
Honglin Yuan, Tengyu Ma · FedML · 16 Jun 2020

STL-SGD: Speeding Up Local SGD with Stagewise Communication Period
Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu · LRM · 11 Jun 2020

Stopping Criteria for, and Strong Convergence of, Stochastic Gradient Descent on Bottou-Curtis-Nocedal Functions
V. Patel · 01 Apr 2020

Machine Learning on Volatile Instances
Xiaoxi Zhang, Jianyu Wang, Gauri Joshi, Carlee Joe-Wong · 12 Mar 2020

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, Shaoshuai Shi, Wei Wang, Yue Liu, Xiaowen Chu · 10 Mar 2020

Stagewise Enlargement of Batch Size for SGD-based Learning
Shen-Yi Zhao, Yin-Peng Xie, Wu-Jun Li · 26 Feb 2020

LASG: Lazily Aggregated Stochastic Gradients for Communication-Efficient Distributed Learning
Tianyi Chen, Yuejiao Sun, W. Yin · FedML · 26 Feb 2020

Distributed Optimization over Block-Cyclic Data
Yucheng Ding, Chaoyue Niu, Yikai Yan, Zhenzhe Zheng, Fan Wu, Guihai Chen, Shaojie Tang, Rongfei Jia · FedML · 18 Feb 2020

Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Yikai Yan, Chaoyue Niu, Yucheng Ding, Zhenzhe Zheng, Fan Wu, Guihai Chen, Shaojie Tang, Zhihua Wu · FedML · 18 Feb 2020

Parallel Restarted SPIDER -- Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity
Pranay Sharma, Swatantra Kafle, Prashant Khanduri, Saikiran Bulusu, K. Rajawat, P. Varshney · FedML · 12 Dec 2019

Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization
Farzin Haddadpour, Mohammad Mahdi Kamani, M. Mahdavi, V. Cadambe · FedML · 30 Oct 2019

Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang · 17 Sep 2019

Communication-Censored Distributed Stochastic Gradient Descent
Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling · 09 Sep 2019