99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko, Filip Hanzely, Peter Richtárik
27 January 2019 · arXiv: 1901.09437
Papers citing "99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it" (6 of 6 papers shown)
Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Sharan Vaswani, Francis R. Bach, Mark Schmidt · 16 Oct 2018

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Chengyue Wu, Song Han, Huizi Mao, Yu Wang, W. Dally · 05 Dec 2017

TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li · 22 May 2017

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · 01 Jul 2014

Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach · 10 Sep 2013

Parallel Coordinate Descent Methods for Big Data Optimization
Peter Richtárik, Martin Takáč · 04 Dec 2012