arXiv:2105.07320
LocalNewton: Reducing Communication Bottleneck for Distributed Learning
Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael W. Mahoney
16 May 2021
Papers citing "LocalNewton: Reducing Communication Bottleneck for Distributed Learning" (8 of 8 papers shown):
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z. Yao, A. Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney
ODL · 01 Jun 2020
Stochastic Weight Averaging in Parallel: Large-Batch Training that Generalizes Well
Vipul Gupta, S. Serrano, D. DeCoste
MoMe · 07 Jan 2020
Serverless Computing: One Step Forward, Two Steps Back
J. M. Hellerstein, Jose M. Faleiro, Joseph E. Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov, Chenggang Wu
10 Dec 2018
Local SGD Converges Fast and Communicates Little
Sebastian U. Stich
FedML · 24 May 2018
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Chengyue Wu, Song Han, Huizi Mao, Yu Wang, W. Dally
05 Dec 2017
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, Martin Jaggi
07 Nov 2016
Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
FedML · 18 Oct 2016
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola
23 Jun 2015