Convergence of ease-controlled Random Reshuffling gradient Algorithms under Lipschitz smoothness

R. Seccia, Corrado Coppola, G. Liuzzi, L. Palagi · 4 December 2022
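The paper concerns an ease-controlled variant of random reshuffling (RR). For orientation, here is a minimal sketch of the plain RR scheme that many of the works below analyze: one gradient pass per epoch over a freshly sampled permutation of the components, i.e., sampling without replacement. This is an illustrative sketch under stated assumptions, not the paper's ease-controlled algorithm; the function names and the least-squares toy problem are my own.

```python
import numpy as np

def random_reshuffling_sgd(grad_i, x0, n, step, epochs, rng=None):
    """Plain random-reshuffling SGD (illustrative sketch).

    grad_i : callable grad_i(i, x) returning the gradient of the i-th
             component function at x
    n      : number of component functions
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        perm = rng.permutation(n)      # reshuffle once per epoch
        for i in perm:                 # visit each component exactly once
            x -= step * grad_i(i, x)
    return x

# Toy usage: minimize 0.5 * sum_i (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
g = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = random_reshuffling_sgd(g, np.zeros(5), n=100, step=0.01, epochs=50)
```

Compared with sampling indices with replacement, each epoch of RR touches every component exactly once, which is what the shuffling-type analyses in the list below exploit.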

Papers citing "Convergence of ease-controlled Random Reshuffling gradient Algorithms under Lipschitz smoothness"

24 papers shown.

Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik · 14 Jun 2022 · FedML

Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization
Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik · 26 Jan 2022 · FedML

Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality
Xiao Li, Andre Milzarek, Junwen Qiu · 10 Oct 2021

Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik · 19 Apr 2021

Proximal and Federated Random Reshuffling
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik · 12 Feb 2021 · FedML

SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation
Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou · 18 Jun 2020

SGD with shuffling: optimal rates without component convexity and large epoch requirements
Kwangjun Ahn, Chulhee Yun, S. Sra · 12 Jun 2020

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik · 10 Jun 2020

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk · 19 Feb 2020

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik · 09 Feb 2020

Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions
Yunwen Lei, Ting Hu, Guiying Li, K. Tang · 03 Feb 2019 · MLT

Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Hilal Asi, John C. Duchi · 12 Oct 2018

AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
Rachel A. Ward, Xiaoxia Wu, Léon Bottou · 05 Jun 2018 · ODL

LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen, G. Giannakis, Tao Sun, W. Yin · 25 May 2018

Adaptive Sampling Strategies for Stochastic Optimization
Raghu Bollapragada, R. Byrd, J. Nocedal · 30 Oct 2017

Stochastic Methods for Composite and Weakly Convex Optimization Problems
John C. Duchi, Feng Ruan · 24 Mar 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016 · ODL

Optimization Methods for Large-Scale Machine Learning
Léon Bottou, Frank E. Curtis, J. Nocedal · 15 Jun 2016

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba · 22 Dec 2014 · ODL

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · 01 Jul 2014 · ODL

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
Saeed Ghadimi, Guanghui Lan · 22 Sep 2013 · ODL

Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach · 10 Sep 2013

Practical recommendations for gradient-based training of deep architectures
Yoshua Bengio · 24 Jun 2012 · 3DH, ODL

Hybrid Deterministic-Stochastic Methods for Data Fitting
M. Friedlander, Mark Schmidt · 13 Apr 2011