Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization

25 February 2020
Hadrien Hendrikx, Lin Xiao, Sébastien Bubeck, Francis R. Bach, Laurent Massoulié
arXiv: 2002.10726
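The listing itself contains no algorithmic detail, so as orientation for the citations below: the paper's central idea is to precondition gradient steps on the global objective with the curvature of a single machine's local loss, which is statistically close to the global loss when data are i.i.d. across machines. The sketch below illustrates that idea on a synthetic least-squares problem. It is a plain preconditioned iteration, not the paper's accelerated Bregman scheme, and the damping constant sigma, the problem sizes, and all variable names are illustrative assumptions.

```python
# Minimal sketch of statistical preconditioning for distributed least squares.
# Not the paper's algorithm (which wraps this idea in an accelerated Bregman
# outer loop); constants below are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_local, dim = 8, 500, 20
mu = 1e-3  # ridge parameter

# Synthetic i.i.d. data, split evenly across workers.
A = [rng.normal(size=(n_local, dim)) for _ in range(n_workers)]
x_star = rng.normal(size=dim)
b = [Ai @ x_star + 0.1 * rng.normal(size=n_local) for Ai, _ in zip(A, range(n_workers))]

def global_grad(x):
    """One communication round: average the workers' local gradients."""
    g = sum(Ai.T @ (Ai @ x - bi) / n_local for Ai, bi in zip(A, b))
    return g / n_workers + mu * x

# Server-side preconditioner: worker 0's local Hessian plus a damping term
# sigma covering the statistical gap between local and global curvature
# (sigma = 0.3 is a conservative illustrative choice).
sigma = 0.3
H0 = A[0].T @ A[0] / n_local + (mu + sigma) * np.eye(dim)

x = np.zeros(dim)
for _ in range(50):
    # Preconditioned step: the dim-by-dim solve stays on the server;
    # only a gradient vector is exchanged per round.
    x -= np.linalg.solve(H0, global_grad(x))

print("gradient norm after 50 rounds:", np.linalg.norm(global_grad(x)))
```

The point of the construction is the communication pattern: each round costs one gradient exchange, while all curvature information comes from data already sitting on the server, so the iteration count is governed by how similar the local loss is to the global one rather than by the global condition number.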

Papers citing "Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization"

32 papers shown

Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis
Ruichen Luo, Sebastian U. Stich, Samuel Horváth, Martin Takáč (08 Jan 2025)

Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity
Dmitry Bylinkin, Aleksandr Beznosikov (21 Dec 2024)

Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning
Dmitry Bylinkin, Kirill Degtyarev, Aleksandr Beznosikov (22 Sep 2024) [FedML]

Stabilized Proximal-Point Methods for Federated Optimization
Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich (09 Jul 2024) [FedML]

Local Methods with Adaptivity via Scaling
Saveliy Chezhegov, Sergey Skorik, Nikolas Khachaturov, Danil Shalagin, A. Avetisyan, Aleksandr Beznosikov, Martin Takáč, Yaroslav Kholodov, Alexander Gasnikov (02 Jun 2024)

SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning
Avetik G. Karagulyan, Egor Shulgin, Abdurakhmon Sadiev, Peter Richtárik (30 May 2024) [FedML]

Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
Qihao Zhou, Haishan Ye, Luo Luo (25 May 2024)

Distributed Event-Based Learning via ADMM
Güner Dilsad Er, Sebastian Trimpe, Michael Muehlebach (17 May 2024) [FedML]

Federated Optimization with Doubly Regularized Drift Correction
Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich (12 Apr 2024) [FedML]

Non-Convex Stochastic Composite Optimization with Polyak Momentum
Yuan Gao, Anton Rodomanov, Sebastian U. Stich (05 Mar 2024)

Optimal Data Splitting in Distributed Optimization for Machine Learning
Daniil Medyakov, Gleb Molodtsov, Aleksandr Beznosikov, Alexander Gasnikov (15 Jan 2024)

Fast Sampling and Inference via Preconditioned Langevin Dynamics
Riddhiman Bhattacharya, Tiefeng Jiang (11 Oct 2023)

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov (15 Feb 2023)

Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
Blake E. Woodworth, Konstantin Mishchenko, Francis R. Bach (07 Feb 2023)

Faster federated optimization under second-order similarity
Ahmed Khaled, Chi Jin (06 Sep 2022) [FedML]

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities
Aleksandr Beznosikov, Alexander Gasnikov (19 Jun 2022)

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
D. Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, G. Scutari (30 May 2022)

Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu (08 Dec 2021) [FedML]

Improving Dynamic Regret in Distributed Online Mirror Descent Using Primal and Dual Information
Nima Eshraghi, Ben Liang (07 Dec 2021)

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč (26 Nov 2021)

Acceleration in Distributed Optimization under Similarity
Helena Lofstrom, G. Scutari, Tianyue Cao, Alexander Gasnikov (24 Oct 2021)

Distributed Saddle-Point Problems Under Similarity
Aleksandr Beznosikov, G. Scutari, Alexander Rogozin, Alexander Gasnikov (22 Jul 2021)

Robust Distributed Optimization With Randomly Corrupted Gradients
Berkay Turan, César A. Uribe, Hoi-To Wai, M. Alizadeh (28 Jun 2021)

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities
Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, V. Samokhin, Sebastian U. Stich, Alexander Gasnikov (15 Jun 2021)

Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh (14 Feb 2021)

Newton Method over Networks is Fast up to the Statistical Precision
Amir Daneshmand, G. Scutari, Pavel Dvurechensky, Alexander Gasnikov (12 Feb 2021)

Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization
Mathieu Even, Laurent Massoulié (04 Feb 2021)

The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro (02 Feb 2021)

First-Order Methods for Convex Optimization
Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern (04 Jan 2021) [ODL]

Stochastic Saddle-Point Optimization for Wasserstein Barycenters
D. Tiapkin, Alexander Gasnikov, Pavel Dvurechensky (11 Jun 2020)

On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
Xiao-Tong Yuan, Ping Li (06 Aug 2019)