Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
arXiv: 1906.12043 (28 June 2019)
Authors: Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng
Communities: ODL, FedML
Papers citing "Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent" (6 papers):
FedEx: Expediting Federated Learning over Heterogeneous Mobile Devices by Overlapping and Participant Selection (01 Jul 2024)
Authors: Jiaxiang Geng, Boyu Li, Xiaoqi Qin, Yixuan Li, Liang Li, Yanzhao Hou, Miao Pan
Community: FedML

FedCos: A Scene-adaptive Federated Optimization Enhancement for Performance Improvement (07 Apr 2022)
Authors: Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
Community: FedML

Communication optimization strategies for distributed deep neural network training: A survey (06 Mar 2020)
Authors: Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao

Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning (22 Jan 2020)
Authors: Yining Qi, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye
Community: FedML

Variance Reduced Local SGD with Lower Communication Complexity (30 Dec 2019)
Authors: Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
Community: FedML

Optimal Distributed Online Prediction using Mini-Batches (07 Dec 2010)
Authors: O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao