Cited By
Communication optimization strategies for distributed deep neural network training: A survey (arXiv 2003.03009)
6 March 2020
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
Papers citing "Communication optimization strategies for distributed deep neural network training: A survey" (6 of 6 shown):
Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP
Shiva Raj Pokhrel, Jinho D. Choi, A. Walid
03 Nov 2022
HPSGD: Hierarchical Parallel SGD With Stale Gradients Featuring
Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
06 Sep 2020
DBS: Dynamic Batch Size For Distributed Deep Neural Network Training
Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
23 Jul 2020
Enabling Compute-Communication Overlap in Distributed Deep Learning Training Platforms
Saeed Rashidi, Matthew Denton, Srinivas Sridharan, Sudarshan Srinivasan, Amoghavarsha Suresh, Jade Nie, T. Krishna
30 Jun 2020
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010