Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization

22 August 2022 · Zhize Li, Jian Li · arXiv:2208.10025

Papers citing "Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization"

Showing 50 of 51 citing papers.

SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression
Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi (FedML) · 20 Jun 2022

3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard A. Gorbunov · 02 Feb 2022

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi · 31 Jan 2022

Faster Rates for Compressed Federated Learning with Client-Variance Reduction
Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik (FedML) · 24 Dec 2021

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik · 07 Oct 2021

DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
Boyue Li, Zhize Li, Yuejie Chi · 04 Oct 2021

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
Haoyu Zhao, Zhize Li, Peter Richtárik (FedML) · 10 Aug 2021

CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Zhize Li, Peter Richtárik · 20 Jul 2021

EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin · 09 Jun 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li · 21 Mar 2021

ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation
Zhize Li, Slavomír Hanzely, Peter Richtárik · 02 Mar 2021

MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik · 15 Feb 2021

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (ODL) · 25 Aug 2020

A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Zhize Li, Peter Richtárik (FedML) · 12 Jun 2020

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik (FedML, AI4CE) · 26 Feb 2020

A unified variance-reduced accelerated gradient method for convex optimization
Guanghui Lan, Zhize Li, Yi Zhou · 29 May 2019

Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang · 01 May 2019

SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points
Zhize Li · 19 Apr 2019

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh · 15 Feb 2019

On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points
Chi Jin, Praneeth Netrapalli, Rong Ge, Sham Kakade, Michael I. Jordan · 13 Feb 2019

Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
Cong Fang, Zhouchen Lin, Tong Zhang · 01 Feb 2019

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang · 04 Jul 2018

Finding Local Minima via Stochastic Nested Variance Reduction
Dongruo Zhou, Pan Xu, Quanquan Gu · 22 Jun 2018

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu · 20 Jun 2018

Escaping Saddles with Stochastic Gradients
Hadi Daneshmand, Jonas Köhler, Aurelien Lucchi, Thomas Hofmann · 15 Mar 2018

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
Zhize Li, Jian Li · 13 Feb 2018

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Chi Jin, Praneeth Netrapalli, Michael I. Jordan (ODL) · 28 Nov 2017

Neon2: Finding Local Minima via First-Order Oracles
Zeyuan Allen-Zhu, Yuanzhi Li · 17 Nov 2017

Random gradient extrapolation for distributed and stochastic optimization
Guanghui Lan, Yi Zhou · 15 Nov 2017

First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time
Yi Tian Xu, Rong Jin, Tianbao Yang (ODL) · 03 Nov 2017

Learning One-hidden-layer Neural Networks with Landscape Design
Rong Ge, Jason D. Lee, Tengyu Ma (MLT) · 01 Nov 2017

Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu (ODL) · 29 Aug 2017

Gradient Descent Can Take Exponential Time to Escape Saddle Points
S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabás Póczós, Aarti Singh · 29 May 2017

How to Escape Saddle Points Efficiently
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham Kakade, Michael I. Jordan (ODL) · 02 Mar 2017

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč (ODL) · 01 Mar 2017

Finding Approximate Local Minima Faster than Gradient Descent
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma · 03 Nov 2016

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Michael I. Jordan · 12 Sep 2016

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt · 16 Aug 2016

Tight Complexity Bounds for Optimizing Composite Objectives
Blake E. Woodworth, Nathan Srebro · 25 May 2016

Matrix Completion has No Spurious Local Minimum
Rong Ge, Jason D. Lee, Tengyu Ma · 24 May 2016

Global Optimality of Local Search for Low Rank Matrix Recovery
Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro (ODL) · 23 May 2016

Stochastic Variance Reduction for Nonconvex Optimization
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola · 19 Mar 2016

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (ODL) · 18 Mar 2016

Efficient approaches for escaping higher order saddle points in non-convex optimization
Anima Anandkumar, Rong Ge · 18 Feb 2016

An optimal randomized incremental gradient method
Guanghui Lan, Yi Zhou · 08 Jul 2015

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
Rong Ge, Furong Huang, Chi Jin, Yang Yuan · 06 Mar 2015

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien (ODL) · 01 Jul 2014

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang (ODL) · 19 Mar 2014

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
Saeed Ghadimi, Guanghui Lan (ODL) · 22 Sep 2013