Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity

5 February 2018
S. Du
Wei Hu
arXiv:1802.01504 · PDF · HTML
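
For context, a minimal sketch of the simultaneous primal-dual gradient (gradient descent-ascent) iteration that the title refers to, written in generic notation chosen here ($L$ for a smooth convex-concave objective, $\eta$ for a common step size, $t$ for the iteration index, none taken from this page); the specific problem classes and step-size conditions under which the paper proves linear convergence are stated in the paper itself and are not reproduced here:

$\min_x \max_y L(x, y)$, updated simultaneously as
$x_{t+1} = x_t - \eta \, \nabla_x L(x_t, y_t)$ and
$y_{t+1} = y_t + \eta \, \nabla_y L(x_t, y_t)$.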

Papers citing "Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity"

48 / 48 papers shown
Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach
Colin Dirren
Mattia Bianchi
Panagiotis D. Grontas
John Lygeros
Florian Dorfler
36
0
0
18 Oct 2024
Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps
Junchi Yang
Murat Yildirim
Qiu Feng
44
0
0
02 Jul 2024
First-Order Methods for Linearly Constrained Bilevel Optimization
Guy Kornowski
Swati Padmanabhan
Kai Wang
Zhe Zhang
S. Sra
78
5
0
18 Jun 2024
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints
Liuyuan Jiang
Quan-Wu Xiao
Victor M. Tenorio
Fernando Real-Rojas
Antonio G. Marques
Tianyi Chen
50
2
0
14 Jun 2024
Convergences for Minimax Optimization Problems over Infinite-Dimensional Spaces Towards Stability in Adversarial Training
Takashi Furuya
Satoshi Okuda
Kazuma Suetake
Yoshihide Sawada
27
0
0
02 Dec 2023
Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices
Guillaume Wang
Lénaïc Chizat
24
0
0
26 May 2023
Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization
C. J. Li
An Yuan
Gauthier Gidel
Quanquan Gu
Michael I. Jordan
31
6
0
31 Oct 2022
On Convergence of Gradient Descent Ascent: A Tight Local Analysis
Haochuan Li
Farzan Farnia
Subhro Das
Ali Jadbabaie
33
10
0
03 Jul 2022
Provable Constrained Stochastic Convex Optimization with XOR-Projected Gradient Descent
Fan Ding
Yijie Wang
Jianzhu Ma
Yexiang Xue
25
0
0
22 Mar 2022
Distributed saddle point problems for strongly concave-convex functions
Muhammad I. Qureshi
U. Khan
39
12
0
11 Feb 2022
Lifted Primal-Dual Method for Bilinearly Coupled Smooth Minimax Optimization
K. K. Thekumparampil
Niao He
Sewoong Oh
28
29
0
19 Jan 2022
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling
D. Kovalev
Alexander Gasnikov
Peter Richtárik
40
33
0
30 Dec 2021
Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex Minimax Machine Learning
Ziyi Chen
Shaocong Ma
Yi Zhou
25
8
0
22 Dec 2021
Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback
Wenjia Ba
Tianyi Lin
Jiawei Zhang
Zhengyuan Zhou
24
9
0
06 Dec 2021
Joint inference and input optimization in equilibrium networks
Swaminathan Gurumurthy
Shaojie Bai
Zachary Manchester
J. Zico Kolter
32
19
0
25 Nov 2021
A Cubic Regularization Approach for Finding Local Minimax Points in Nonconvex Minimax Optimization
Ziyi Chen
Zhengyang Hu
Qunwei Li
Zhe Wang
Yi Zhou
42
7
0
14 Oct 2021
Safe Pontryagin Differentiable Programming
Wanxin Jin
Shaoshuai Mou
George J. Pappas
25
39
0
31 May 2021
Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization
Yaodong Yu
Tianyi Lin
Eric Mazumdar
Michael I. Jordan
OOD
35
22
0
27 Apr 2021
Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
Yuyang Deng
M. Mahdavi
30
58
0
25 Feb 2021
Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning
Luofeng Liao
Zuyue Fu
Zhuoran Yang
Yixin Wang
Mladen Kolar
Zhaoran Wang
OffRL
18
34
0
19 Feb 2021
Efficient Algorithms for Federated Saddle Point Optimization
Charlie Hou
K. K. Thekumparampil
Giulia Fanti
Sewoong Oh
FedML
29
23
0
12 Feb 2021
Local and Global Uniform Convexity Conditions
Thomas Kerdreux
Alexandre d’Aspremont
Sebastian Pokutta
17
12
0
09 Feb 2021
Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry
Ziyi Chen
Yi Zhou
Tengyu Xu
Yingbin Liang
17
34
0
09 Feb 2021
On Convergence of Gradient Expected Sarsa($\lambda$)
Long Yang
Gang Zheng
Yu Zhang
Qian Zheng
Pengfei Li
Gang Pan
21
2
0
14 Dec 2020
Train simultaneously, generalize better: Stability of gradient-based minimax learners
Farzan Farnia
Asuman Ozdaglar
31
47
0
23 Oct 2020
Novel min-max reformulations of Linear Inverse Problems
Mohammed Rayyan Sheriff
Debasish Chatterjee
20
1
0
05 Jul 2020
Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence
Tengyu Xu
Zhe Wang
Yingbin Liang
H. Vincent Poor
26
30
0
16 Jun 2020
Cumulant GAN
Yannis Pantazis
D. Paul
M. Fasoulakis
Y. Stylianou
M. Katsoulakis
GAN
14
18
0
11 Jun 2020
Improved Algorithms for Convex-Concave Minimax Optimization
Yuanhao Wang
Jian Li
16
62
0
11 Jun 2020
Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems
Junchi Yang
Negar Kiyavash
Niao He
25
83
0
22 Feb 2020
An Optimal Multistage Stochastic Gradient Method for Minimax Problems
Alireza Fallah
Asuman Ozdaglar
S. Pattathil
14
36
0
13 Feb 2020
An $O(s^r)$-Resolution ODE Framework for Understanding Discrete-Time Algorithms and Applications to the Linear Convergence of Minimax Problems
Haihao Lu
28
6
0
23 Jan 2020
Optimization and Learning with Information Streams: Time-varying Algorithms and Applications
E. Dall’Anese
Andrea Simonetto
Stephen Becker
Liam Madden
25
69
0
17 Oct 2019
Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization
Adithya M. Devraj
Jianshu Chen
25
13
0
22 Jul 2019
On the Global Convergence of Actor-Critic: A Case for Linear Quadratic Regulator with Ergodic Cost
Zhuoran Yang
Yongxin Chen
Mingyi Hong
Zhaoran Wang
32
39
0
14 Jul 2019
Efficient Algorithms for Smooth Minimax Optimization
K. K. Thekumparampil
Prateek Jain
Praneeth Netrapalli
Sewoong Oh
22
190
0
02 Jul 2019
Primal-Dual Block Frank-Wolfe
Qi Lei
Jiacheng Zhuo
C. Caramanis
Inderjit S. Dhillon
A. Dimakis
23
0
0
06 Jun 2019
Last-iterate convergence rates for min-max optimization
Jacob D. Abernethy
Kevin A. Lai
Andre Wibisono
16
73
0
05 Jun 2019
Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems
Aryan Mokhtari
Asuman Ozdaglar
S. Pattathil
35
20
0
03 Jun 2019
On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems
Tianyi Lin
Chi Jin
Michael I. Jordan
11
499
0
02 Jun 2019
Stochastic Primal-Dual Algorithms with Faster Convergence than $O(1/\sqrt{T})$ for Problems without Bilinear Structure
Yan Yan
Yi Tian Xu
Qihang Lin
Lijun Zhang
Tianbao Yang
22
35
0
23 Apr 2019
On Structured Filtering-Clustering: Global Error Bound and Optimal First-Order Algorithms
Nhat Ho
Tianyi Lin
Michael I. Jordan
33
2
0
16 Apr 2019
A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach
Aryan Mokhtari
Asuman Ozdaglar
S. Pattathil
27
324
0
24 Jan 2019
On the Global Convergence of Imitation Learning: A Case for Linear Quadratic Regulator
Qi Cai
Mingyi Hong
Yongxin Chen
Zhaoran Wang
24
34
0
11 Jan 2019
Block Belief Propagation for Parameter Learning in Markov Random Fields
You Lu
Zhiyuan Liu
Bert Huang
14
0
0
09 Nov 2018
Adversarial Label Learning
Chidubem Arachie
Bert Huang
19
22
0
22 May 2018
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao
Tong Zhang
ODL
93
737
0
19 Mar 2014
Convex Sparse Matrix Factorizations
Francis R. Bach
Julien Mairal
Jean Ponce
142
143
0
10 Dec 2008