arXiv:1608.04636
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 August 2016
Papers citing
"Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition"
50 / 167 papers shown
Exact Pareto Optimal Search for Multi-Task Learning and Multi-Criteria Decision-Making. Debabrata Mahapatra, Vaibhav Rajan. 02 Aug 2021.
Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints. Shaojie Li, Yong Liu. 19 Jul 2021.
Faithful Edge Federated Learning: Scalability and Privacy. Meng Zhang, Ermin Wei, R. Berry. 30 Jun 2021. [FedML]
Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent. Spencer Frei, Quanquan Gu. 25 Jun 2021.
Who Leads and Who Follows in Strategic Classification? Tijana Zrnic, Eric Mazumdar, S. Shankar Sastry, Michael I. Jordan. 23 Jun 2021.
SG-PALM: a Fast Physically Interpretable Tensor Graphical Model. Yu Wang, Alfred Hero. 26 May 2021.
Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis. Stephan Wojtowytsch. 04 May 2021.
Convergence Analysis and System Design for Federated Learning over Wireless Networks. Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief. 30 Apr 2021.
Decentralized Federated Averaging. Tao Sun, Dongsheng Li, Bao Wang. 23 Apr 2021. [FedML]
Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency. Yuyang Deng, M. Mahdavi. 25 Feb 2021.
Provable Super-Convergence with a Large Cyclical Learning Rate. Samet Oymak. 22 Feb 2021.
Convergence of stochastic gradient descent schemes for Lojasiewicz-landscapes. Steffen Dereich, Sebastian Kassing. 16 Feb 2021.
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients. A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani. 14 Feb 2021. [FedML]
Stochastic Gradient Langevin Dynamics with Variance Reduction. Zhishen Huang, Stephen Becker. 12 Feb 2021.
AEGD: Adaptive Gradient Descent with Energy. Hailiang Liu, Xuping Tian. 10 Oct 2020. [ODL]
On Communication Compression for Distributed Optimization on Heterogeneous Data. Sebastian U. Stich. 04 Sep 2020.
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
AdaScale SGD: A User-Friendly Algorithm for Distributed Training. Tyler B. Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin. 09 Jul 2020. [ODL]
Stochastic Hamiltonian Gradient Methods for Smooth Games. Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas. 08 Jul 2020.
DeltaGrad: Rapid retraining of machine learning models. Yinjun Wu, Edgar Dobriban, S. Davidson. 26 Jun 2020. [MU]
SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation. Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou. 18 Jun 2020.
A Non-Asymptotic Analysis for Stein Variational Gradient Descent. Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, A. Gretton. 17 Jun 2020.
Linear Last-iterate Convergence in Constrained Saddle-point Optimization. Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo. 16 Jun 2020.
Walking in the Shadow: A New Perspective on Descent Directions for Constrained Minimization. Hassan Mortagy, Swati Gupta, S. Pokutta. 15 Jun 2020.
An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias. Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu. 14 Jun 2020.
SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence. Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet. 03 Jun 2020.
Detached Error Feedback for Distributed SGD with Random Sparsification. An Xu, Heng-Chiao Huang. 11 Apr 2020.
Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence. Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien. 24 Feb 2020.
Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems. Junchi Yang, Negar Kiyavash, Niao He. 22 Feb 2020.
A Unified Convergence Analysis for Shuffling-Type Gradient Methods. Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk. 19 Feb 2020.
Better Theory for SGD in the Nonconvex World. Ahmed Khaled, Peter Richtárik. 09 Feb 2020.
Complexity Guarantees for Polyak Steps with Momentum. Mathieu Barré, Adrien B. Taylor, Alexandre d’Aspremont. 03 Feb 2020.
Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem. Hesameddin Mohammadi, A. Zare, Mahdi Soltanolkotabi, M. Jovanović. 26 Dec 2019.
Fast Stochastic Ordinal Embedding with Variance Reduction and Adaptive Step Size. Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan Yao. 01 Dec 2019.
On the Convergence of Local Descent Methods in Federated Learning. Farzin Haddadpour, M. Mahdavi. 31 Oct 2019. [FedML]
Linear-Quadratic Mean-Field Reinforcement Learning: Convergence of Policy Gradient Methods. René Carmona, Mathieu Laurière, Zongjun Tan. 09 Oct 2019.
Stochastic gradient descent for hybrid quantum-classical optimization. R. Sweke, Frederik Wilde, Johannes Jakob Meyer, Maria Schuld, Paul K. Fährmann, Barthélémy Meynard-Piganeau, Jens Eisert. 02 Oct 2019.
Differentially Private Meta-Learning. Jeffrey Li, M. Khodak, S. Caldas, Ameet Talwalkar. 12 Sep 2019. [FedML]
On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. Alekh Agarwal, Sham Kakade, J. Lee, G. Mahajan. 01 Aug 2019.
Adversarial Attack Generation Empowered by Min-Max Optimization. Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, B. Li. 09 Jun 2019. [AAML]
Global Optimality Guarantees For Policy Gradient Methods. Jalaj Bhandari, Daniel Russo. 05 Jun 2019.
Controlling Neural Networks via Energy Dissipation. Michael Möller, Thomas Möllenhoff, Daniel Cremers. 05 Apr 2019.
Provable Guarantees for Gradient-Based Meta-Learning. M. Khodak, Maria-Florina Balcan, Ameet Talwalkar. 27 Feb 2019. [FedML]
ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization. Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh. 15 Feb 2019.
Solving Non-Convex Non-Concave Min-Max Games Under Polyak-Łojasiewicz Condition. Maziar Sanjabi, Meisam Razaviyayn, J. Lee. 07 Dec 2018.
Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron. Sharan Vaswani, Francis R. Bach, Mark W. Schmidt. 16 Oct 2018.
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. Ohad Shamir. 23 Sep 2018.
SEGA: Variance Reduction via Gradient Sketching. Filip Hanzely, Konstantin Mishchenko, Peter Richtárik. 09 Sep 2018.
Convergence of Cubic Regularization for Nonconvex Optimization under KL Property. Yi Zhou, Zhe Wang, Yingbin Liang. 22 Aug 2018.
Stochastic Nested Variance Reduction for Nonconvex Optimization. Dongruo Zhou, Pan Xu, Quanquan Gu. 20 Jun 2018.