One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods

27 May 2019 · Filip Hanzely, Peter Richtárik · arXiv:1905.11266

Papers citing "One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods"

9 papers

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, Artavazd Maranjyan, Peter Richtárik
07 Mar 2024

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
Artavazd Maranjyan, M. Safaryan, Peter Richtárik
28 Oct 2022

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
FedML
03 Nov 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
FedML · AI4CE
26 Feb 2020

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik
11 Feb 2020

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL
19 Mar 2014