Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition

16 August 2016
Hamed Karimi, J. Nutini, Mark W. Schmidt
arXiv:1608.04636
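
For context on what the citing papers below build on: the paper's main result is that if f has an L-Lipschitz gradient and satisfies the Polyak-Łojasiewicz (PL) inequality (1/2)·||∇f(x)||² ≥ μ·(f(x) − f*), then gradient descent with step size 1/L satisfies f(x_k) − f* ≤ (1 − μ/L)^k (f(x_0) − f*), with no convexity assumption. Below is a minimal Python sketch of that rate on the one-dimensional nonconvex example f(x) = x² + 3 sin²(x) discussed in the paper (PL with μ = 1/32, as stated there, and L = 8 since f''(x) = 2 + 6 cos(2x) ≤ 8); the variable names and printed comparison are illustrative, not from this page.

import numpy as np

# Nonconvex but PL: f(x) = x^2 + 3 sin^2(x); minimizer x* = 0, f* = 0.
f = lambda x: x**2 + 3 * np.sin(x)**2
grad = lambda x: 2 * x + 3 * np.sin(2 * x)   # f'(x): d/dx of 3 sin^2(x) is 3 sin(2x)

L, mu, f_star = 8.0, 1.0 / 32.0, 0.0
x = 3.0                                      # arbitrary starting point
gap0 = f(x) - f_star

for k in range(1, 51):
    x -= grad(x) / L                         # gradient step with size 1/L
    if k % 10 == 0:
        bound = (1 - mu / L) ** k * gap0     # the theorem's linear-rate bound
        print(f"k={k:2d}  gap={f(x) - f_star:.3e}  bound={bound:.3e}")

The printed gap should sit below the bound at every checkpoint, illustrating linear convergence without convexity.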

Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition"

50 of 167 citing papers shown

Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by Coordinate Descent
Sara Venturini, Andrea Cristofari, Francesco Rinaldi, Francesco Tudisco · 28 Jan 2023

Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing
Jikai Jin, Zhiyuan Li, Kaifeng Lyu, S. Du, Jason D. Lee · MLT · 27 Jan 2023

On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality
Lu Xia, M. Hochstenbach, Stefano Massei · 23 Jan 2023

Convergence beyond the over-parameterized regime using Rayleigh quotients
David A. R. Robin, Kevin Scaman, Marc Lelarge · 19 Jan 2023

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, P. Li · 09 Jan 2023

Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods
Ben Adcock, Matthew J. Colbrook, Maksym Neyra-Nesterenko · 05 Jan 2023

Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization
G. Fort, Eric Moulines · 02 Jan 2023

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
Quoc Tran-Dinh, Marten van Dijk · 19 Dec 2022

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
Mayank Baranwal, Param Budhraja, V. Raj, A. Hota · 07 Dec 2022

Regularized Rényi divergence minimization through Bregman proximal gradient algorithms
Thomas Guilmeau, Émilie Chouzenoux, Victor Elvira · 09 Nov 2022

Optimization for Amortized Inverse Problems
Tianci Liu, Tong Yang, Quan Zhang, Qi Lei · 25 Oct 2022

Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu · 24 Oct 2022

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan · 13 Oct 2022

Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces
Amirali Aghazadeh, Nived Rajaraman, Tony Tu, Kannan Ramchandran · 05 Oct 2022

Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations
Jialing Liao, Zheng Chen, Erik G. Larsson · 05 Oct 2022

Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition
Jianhao Ma, Li-Zhen Guo, S. Fattahi · 01 Oct 2022

Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang · 27 Sep 2022

Efficiency Ordering of Stochastic Gradient Descent
Jie Hu, Vishwaraj Doshi, Do Young Eun · 15 Sep 2022

Statistical Learning Theory for Control: A Finite Sample Perspective
Anastasios Tsiamis, Ingvar M. Ziemann, Nikolai Matni, George J. Pappas · 12 Sep 2022

Improved Policy Optimization for Online Imitation Learning
J. Lavington, Sharan Vaswani, Mark W. Schmidt · OffRL · 29 Jul 2022

Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis
Tao Li, Haozhe Lei, Quanyan Zhu · AAML · 29 Jul 2022

Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
Wei Jiang, Gang Li, Yibo Wang, Lijun Zhang, Tianbao Yang · 18 Jul 2022

Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis
Taha Belkhouja, Yan Yan, J. Doppa · OOD, AI4TS · 09 Jul 2022

Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out
Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu · 22 Jun 2022

Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora · FAtt · 14 Jun 2022

Towards Understanding Sharpness-Aware Minimization
Maksym Andriushchenko, Nicolas Flammarion · AAML · 13 Jun 2022

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran · 13 Jun 2022

Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression
Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-hsiu Hsieh · 08 Jun 2022

Learning from time-dependent streaming data with online stochastic algorithms
Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger · 25 May 2022

Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD
Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias · 26 Apr 2022

Sharper Utility Bounds for Differentially Private Models
Yilin Kang, Yong Liu, Jian Li, Weiping Wang · FedML · 22 Apr 2022

Convergence of gradient descent for deep neural networks
S. Chatterjee · ODL · 30 Mar 2022

A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima
Tae-Eon Ko, Xiantao Li · 21 Mar 2022

Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney · FedML · 09 Mar 2022

Tackling benign nonconvexity with smoothing and stochastic gradients
Harsh Vardhan, Sebastian U. Stich · 18 Feb 2022

Delay-adaptive step-sizes for asynchronous learning
Xuyang Wu, Sindri Magnússon, Hamid Reza Feyzmahdavian, M. Johansson · 17 Feb 2022

Optimal Algorithms for Stochastic Multi-Level Compositional Optimization
Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang · 15 Feb 2022

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo
Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Shunshi Zhang · 10 Feb 2022

PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation
Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler H. Summers, John Lygeros · 01 Feb 2022

Differentially Private SGDA for Minimax Problems
Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying · 22 Jan 2022

Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems
Thinh T. Doan · 17 Dec 2021

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa · MLT · 13 Dec 2021

Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent Flows
Param Budhraja, Mayank Baranwal, Kunal Garg, A. Hota · 02 Dec 2021

Linear Speedup in Personalized Collaborative Learning
El Mahdi Chayti, Sai Praneeth Karimireddy, Sebastian U. Stich, Nicolas Flammarion, Martin Jaggi · FedML · 10 Nov 2021

Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad · 21 Oct 2021

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan · 19 Oct 2021

A global convergence theory for deep ReLU implicit networks via over-parameterization
Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao · MLT · 11 Oct 2021

DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning
Javier Yu, Joseph A. Vincent, Mac Schwager · 17 Sep 2021

Bundled Gradients through Contact via Randomized Smoothing
H. Suh, Tao Pang, Russ Tedrake · 11 Sep 2021

Iterated Vector Fields and Conservatism, with Applications to Federated Learning
Zachary B. Charles, Keith Rush · 08 Sep 2021