ResearchTrend.AI

arXiv:2401.15240
Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games

26 January 2024
Yang Cai
Haipeng Luo
Chen-Yu Wei
Weiqiang Zheng
Abstract

We study policy optimization algorithms for computing correlated equilibria in multi-player general-sum Markov games. Previous results achieve an $O(T^{-1/2})$ convergence rate to a correlated equilibrium and an accelerated $O(T^{-3/4})$ convergence rate to the weaker notion of coarse correlated equilibrium. In this paper, we improve both results significantly by providing an uncoupled policy optimization algorithm that attains a near-optimal $\tilde{O}(T^{-1})$ convergence rate for computing a correlated equilibrium. Our algorithm is constructed by combining two main elements: (i) smooth value updates and (ii) the optimistic follow-the-regularized-leader algorithm with the log-barrier regularizer.
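To make the second ingredient concrete, the following is a minimal sketch of one optimistic-FTRL step with the log-barrier regularizer on the probability simplex. This is an illustration of the generic update rule, not the paper's full algorithm (it omits the smooth value updates and the multi-agent Markov-game structure); the step size, loss vectors, and the bisection-based solver are illustrative choices.

```python
import numpy as np

def log_barrier_oftrl_step(cum_loss, pred_loss, eta):
    """One optimistic-FTRL step with the log-barrier regularizer.

    Returns  x = argmin over the simplex of
        eta * <x, cum_loss + pred_loss> - sum_i log(x_i),
    where pred_loss is the optimistic prediction of the next loss.
    The KKT conditions give x_i = 1 / (eta * G_i + lam); we locate the
    Lagrange multiplier lam by bisection so that sum_i x_i = 1.
    """
    g = eta * (np.asarray(cum_loss, float) + np.asarray(pred_loss, float))
    lo = -g.min() + 1e-12   # lam must exceed -min(g) so every x_i > 0
    hi = lo + len(g)        # at this lam, sum_i 1/(g_i + lam) <= 1
    for _ in range(100):    # bisection on the scalar multiplier
        lam = 0.5 * (lo + hi)
        if (1.0 / (g + lam)).sum() > 1.0:
            lo = lam
        else:
            hi = lam
    x = 1.0 / (g + 0.5 * (lo + hi))
    return x / x.sum()      # renormalize away residual bisection error

# Toy run: a single agent facing a fixed loss vector, with the
# optimistic prediction set to the most recent loss. The policy
# concentrates on the lowest-loss action over time.
losses = np.array([0.9, 0.1, 0.5])
cum = np.zeros(3)
for t in range(200):
    cum += losses
    x = log_barrier_oftrl_step(cum, losses, eta=0.1)
print(np.round(x, 3))  # most mass on action 1 (the lowest-loss action)
```

The log barrier blows up near the simplex boundary, which keeps iterates strictly interior; this stability is what the optimistic prediction term exploits to obtain faster rates than vanilla no-regret dynamics.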
