
arXiv:2301.13120
Doubly Optimal No-Regret Learning in Monotone Games

30 January 2023
Yang Cai
Weiqiang Zheng
Abstract

We consider online learning in multi-player smooth monotone games. Existing algorithms have limitations such as (1) being applicable only to strongly monotone games; (2) lacking the no-regret guarantee; (3) having only an asymptotic or slow $O(\frac{1}{\sqrt{T}})$ last-iterate convergence rate to a Nash equilibrium. While the $O(\frac{1}{\sqrt{T}})$ rate is tight for a large class of algorithms, including the well-studied extragradient algorithm and optimistic gradient algorithm, it is not optimal for all gradient-based algorithms. We propose the accelerated optimistic gradient (AOG) algorithm, the first doubly optimal no-regret learning algorithm for smooth monotone games. Namely, our algorithm achieves both (i) the optimal $O(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $O(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium in multi-player smooth monotone games. As a byproduct of the accelerated last-iterate convergence rate, we further show that each player suffers only an $O(\log T)$ individual worst-case dynamic regret, providing an exponential improvement over the previous state-of-the-art $O(\sqrt{T})$ bound.
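To make the abstract's setting concrete, the sketch below runs the classical optimistic gradient update (one of the baselines the abstract compares against, not the paper's AOG algorithm) on the toy bilinear monotone game $f(x, y) = xy$, whose unique Nash equilibrium is the origin. The game, step size, and horizon are illustrative assumptions, not taken from the paper; last-iterate convergence here illustrates the kind of guarantee the abstract discusses.

```python
# Sketch of the optimistic gradient method on the bilinear monotone game
# min_x max_y f(x, y) = x * y, whose Nash equilibrium is (0, 0).
# Step size eta and horizon T are illustrative choices, not from the paper.

def optimistic_gradient(x0=1.0, y0=1.0, eta=0.1, T=2000):
    x, y = x0, y0
    gx_prev, gy_prev = 0.0, 0.0  # previous gradients (the "optimism" term)
    for _ in range(T):
        gx, gy = y, -x           # gradient for x (descent), y (ascent, sign flipped)
        # Optimistic update: use the extrapolated gradient 2*g_t - g_{t-1}
        x -= eta * (2 * gx - gx_prev)
        y -= eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

x_last, y_last = optimistic_gradient()  # last iterate approaches the equilibrium (0, 0)
```

Plain simultaneous gradient descent-ascent cycles on this game; the one-step gradient memory is what yields last-iterate convergence, at the $O(\frac{1}{\sqrt{T}})$ rate the paper's AOG algorithm improves to $O(\frac{1}{T})$.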
