One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
Filip Hanzely, Peter Richtárik
arXiv:1905.11266, 27 May 2019
Papers citing "One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods" (9 papers shown)
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, Artavazd Maranjyan, Peter Richtárik
07 Mar 2024
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
Artavazd Maranjyan, M. Safaryan, Peter Richtárik
28 Oct 2022
Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
03 Nov 2020
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
26 Feb 2020
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik
11 Feb 2020
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014