Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games

23 February 2020
Tianyi Lin
Zhengyuan Zhou
P. Mertikopoulos
Michael I. Jordan
Abstract

In this paper, we consider multi-agent learning via online gradient descent (OGD) in a class of games called λ-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games. We characterize the finite-time last-iterate convergence rate for joint OGD learning on λ-cocoercive games; further, building on this result, we develop a fully adaptive OGD learning algorithm that does not require any knowledge of the problem parameters (e.g., the cocoercivity constant λ) and show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart. Subsequently, we extend OGD learning to the noisy gradient feedback case and establish last-iterate convergence results -- first qualitative almost sure convergence, then quantitative finite-time convergence rates -- all under non-decreasing step-sizes. To our knowledge, these are the first results to fill several gaps in the existing multi-agent online learning literature, where three aspects -- finite-time convergence rates, non-decreasing step-sizes, and fully adaptive algorithms -- had been unexplored before.
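
To make the setting concrete, below is a minimal sketch (not the paper's algorithm or its rate analysis) of joint OGD learning on a toy two-player unconstrained game whose pseudo-gradient is strongly monotone and Lipschitz, hence λ-cocoercive. The cost coefficients, step-size, and iteration count are illustrative assumptions, not values taken from the paper.

# Minimal sketch: joint online gradient descent (OGD) on a toy two-player
# unconstrained quadratic game. The coefficients below make the joint
# pseudo-gradient operator strongly monotone and Lipschitz, hence
# lambda-cocoercive, so the last iterate approaches the Nash equilibrium.
import numpy as np

rng = np.random.default_rng(0)

# Player i's cost: c_i(x_i, x_{-i}) = 0.5 * a_i * x_i**2 + b * x_i * x_{-i}
# (illustrative coefficients; the unique Nash equilibrium is (0, 0))
a1, a2, b = 2.0, 3.0, 0.5

def pseudo_gradient(x):
    """Stack of each player's partial gradient of its own cost."""
    x1, x2 = x
    return np.array([a1 * x1 + b * x2,   # d c_1 / d x_1
                     a2 * x2 + b * x1])  # d c_2 / d x_2

x = rng.normal(size=2)   # arbitrary joint initial action profile
eta = 0.1                # an illustrative constant (hence non-decreasing) step-size
for t in range(2000):
    x = x - eta * pseudo_gradient(x)   # each player descends its own gradient

print("last iterate:", x)  # converges toward the Nash equilibrium (0, 0)

In this sketch convergence of the last iterate follows from cocoercivity of the pseudo-gradient and a sufficiently small constant step-size; the paper's contribution is to quantify such convergence in finite time and to remove the need to know λ via an adaptive step-size rule.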
