arXiv: 2006.09361
Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence
16 June 2020
Tengyu Xu, Zhe Wang, Yingbin Liang, H. Vincent Poor
Papers citing "Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence" (8 papers shown)
Single-loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions
Quanqi Hu, Qi Qi, Zhaosong Lu, Tianbao Yang
28 May 2024
PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities
Zhuqing Liu, Xin Zhang, Songtao Lu, Jia-Wei Liu
05 Mar 2023
Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems
Hongchang Gao
06 Dec 2022
Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems
Zi Xu, Ziqi Wang, Junlin Wang, Y. Dai
24 Nov 2022
The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He
29 Mar 2021
Zeroth-Order Algorithms for Nonconvex Minimax Problems with Improved Complexities
Zhongruo Wang, Krishnakumar Balasubramanian, Shiqian Ma, Meisam Razaviyayn
22 Jan 2020
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du, Wei Hu
05 Feb 2018
Max-value Entropy Search for Efficient Bayesian Optimization
Zi Wang, Stefanie Jegelka
06 Mar 2017