Forward Looking Best-Response Multiplicative Weights Update Methods

7 June 2021
M. Fasoulakis
E. Markakis
Yannis Pantazis
Constantinos Varsos
arXiv: 2106.03579 (abs | PDF | HTML)
Abstract

We propose a novel variant of the multiplicative weights update method with forward-looking best-response strategies that guarantees last-iterate convergence for zero-sum games with a unique Nash equilibrium. In particular, we show that the proposed algorithm converges to an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate at a rate of at least $\Omega(\eta^{1+\frac{1}{\rho}})$, for a sufficiently small learning rate $\eta$. Once the method enters a sufficiently small neighborhood of the solution, it becomes a contraction and converges to the Nash equilibrium of the game. Furthermore, we perform an experimental comparison with the recently proposed optimistic variant of the multiplicative weights update method of Daskalakis and Panageas (2019), which has also been proved to attain last-iterate convergence. Our findings show that our algorithm offers substantial gains in both the convergence rate and the region of contraction relative to the previous approach.
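
The abstract specifies the method only at a high level, so the sketch below is an illustrative guess rather than the paper's algorithm: it runs standard multiplicative weights on a bilinear zero-sum game $\max_x \min_y x^\top A y$ and, as one possible reading of "forward-looking best-response", lets each player update against the opponent's anticipated pure best response instead of the opponent's current strategy. The payoff matrix, learning rate, and function names are all assumptions made for illustration.

```python
import numpy as np

def best_response(payoffs):
    """Pure best response: put all mass on a payoff-maximizing coordinate."""
    br = np.zeros_like(payoffs)
    br[np.argmax(payoffs)] = 1.0
    return br

def forward_looking_mwu_step(A, x, y, eta):
    """
    One step of a hypothetical forward-looking MWU sketch for the zero-sum
    game max_x min_y x^T A y.  Plain MWU would use the payoff against the
    opponent's *current* strategy; here each player instead updates against
    the opponent's anticipated pure best response.  This is one possible
    reading of "forward-looking best-response"; the paper's exact rule may differ.
    """
    # Anticipated opponent strategies (hypothetical interpretation).
    y_look = best_response(-A.T @ x)   # column player minimizes x^T A y
    x_look = best_response(A @ y)      # row player maximizes x^T A y

    # Multiplicative weights update against the anticipated opponents.
    x_new = x * np.exp(eta * (A @ y_look))
    y_new = y * np.exp(-eta * (A.T @ x_look))
    return x_new / x_new.sum(), y_new / y_new.sum()

# Toy run on matching pennies (unique Nash equilibrium at the uniform strategies).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x = np.array([0.9, 0.1])
y = np.array([0.2, 0.8])
for _ in range(2000):
    x, y = forward_looking_mwu_step(A, x, y, eta=0.05)
print(x, y)  # last iterate of the sketch dynamics
```

Replacing y_look and x_look with the current iterates y and x recovers plain MWU, whose last iterate is known to cycle rather than converge in zero-sum games; the abstract's claim is that the forward-looking correction restores last-iterate convergence at the stated rate.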
