Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback
arXiv:2112.02856

6 December 2021
Wenjia Ba
Tianyi Lin
Jiawei Zhang
Zhengyuan Zhou

Papers citing "Doubly Optimal No-Regret Online Learning in Strongly Monotone Games with Bandit Feedback"

16 citing papers shown.
Last-iterate Convergence in Extensive-Form Games
Chung-Wei Lee
Christian Kroer
Haipeng Luo
27 Jun 2021
Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games
Tianyi Lin
Zhengyuan Zhou
P. Mertikopoulos
Michael I. Jordan
23 Feb 2020
Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems
Noah Golowich
S. Pattathil
C. Daskalakis
Asuman Ozdaglar
31 Jan 2020
Introduction to Online Convex Optimization
Elad Hazan
07 Sep 2019
On the convergence of single-call stochastic extra-gradient methods
Yu-Guan Hsieh
F. Iutzeler
J. Malick
P. Mertikopoulos
22 Aug 2019
A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach
Aryan Mokhtari
Asuman Ozdaglar
S. Pattathil
24 Jan 2019
Bandit learning in concave $N$-person games
Mario Bravo
David S. Leslie
P. Mertikopoulos
03 Oct 2018
Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile
P. Mertikopoulos
Bruno Lecouat
Houssam Zenati
Chuan-Sheng Foo
V. Chandrasekhar
Georgios Piliouras
07 Jul 2018
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
S. Du
Wei Hu
05 Feb 2018
Training GANs with Optimism
C. Daskalakis
Andrew Ilyas
Vasilis Syrgkanis
Haoyang Zeng
31 Oct 2017
Cycles in adversarial regularized learning
P. Mertikopoulos
Christos H. Papadimitriou
Georgios Piliouras
08 Sep 2017
Learning in games with continuous action sets and unknown payoff functions
P. Mertikopoulos
Zhengyuan Zhou
25 Aug 2016
Distributed stochastic optimization via matrix exponential learning
P. Mertikopoulos
E. V. Belmega
Romain Negrel
L. Sanguinetti
03 Jun 2016
Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension
Sébastien Bubeck
O. Dekel
Tomer Koren
Yuval Peres
23 Feb 2015
A distributed block coordinate descent method for training $l_1$ regularized linear classifiers
D. Mahajan
S. Keerthi
S. Sundararajan
18 May 2014
On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization
Ohad Shamir
11 Sep 2012