Corrupted Learning Dynamics in Games

Abstract

Learning in games is the problem where multiple players interact in a shared environment, each aiming to minimize their own regret, and it is known that an approximate equilibrium can be obtained when all players employ no-regret algorithms. Notably, by adopting optimistic follow-the-regularized-leader (OFTRL), the regret of each player after $T$ rounds is constant in two-player zero-sum games, implying that an equilibrium can be computed at a faster rate of $O(1/T)$. However, this acceleration is limited to the honest regime, in which all players fully adhere to the given algorithms. To address this limitation, this paper presents corrupted learning dynamics that adaptively find an equilibrium at a rate that depends on each player's degree of deviation from the given algorithm's output. First, in two-player zero-sum games, we provide learning dynamics in which the external regret of the x-player (and similarly of the y-player) in the corrupted regime is roughly bounded by $O(\log (m_\mathrm{x} m_\mathrm{y}) + \sqrt{C_\mathrm{y}} + C_\mathrm{x})$, which implies a convergence rate of $\tilde{O}((\sqrt{C_\mathrm{y}} + C_\mathrm{x})/T)$ to a Nash equilibrium. Here, $m_\mathrm{x}$ and $m_\mathrm{y}$ are the numbers of actions of the x- and y-players, respectively, and $C_\mathrm{x}$ and $C_\mathrm{y}$ are the cumulative deviations of the x- and y-players from their given algorithms. Furthermore, we extend our approach to multi-player general-sum games, showing that the swap regret of player $i$ in the corrupted regime is bounded by $O(\log T + \sqrt{\sum_j C_j \log T} + C_i)$, where $C_i$ is the cumulative deviation of player $i$ from the given algorithm. This implies a convergence rate of $O((\log T + \sqrt{\sum_j C_j \log T} + C_i)/T)$ to a correlated equilibrium. Our learning dynamics are agnostic to the corruption levels and are based on OFTRL with new adaptive learning rates.
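
To make the OFTRL dynamics concrete, the following is a minimal sketch of vanilla optimistic FTRL with an entropy regularizer in a two-player zero-sum game, in the honest regime only; it does not reproduce the paper's corruption-aware adaptive learning rates. The payoff matrix `A`, the constant step size `eta`, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def oftrl_zero_sum(A, T=1000, eta=0.1):
    """Run OFTRL (optimistic FTRL with a negative-entropy regularizer)
    for both players of the zero-sum game with payoff matrix A, where
    the x-player minimizes x^T A y and the y-player maximizes it.
    Returns the average strategies, which form an approximate Nash
    equilibrium in the honest regime."""
    m, n = A.shape
    Gx, Gy = np.zeros(m), np.zeros(n)          # cumulative loss vectors
    gx_prev, gy_prev = np.zeros(m), np.zeros(n)  # optimistic predictions
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        # With the entropy regularizer, the OFTRL update reduces to a
        # softmax over the cumulative losses plus the optimistic
        # prediction (here: the most recently observed loss vector).
        x = softmax(-eta * (Gx + gx_prev))
        y = softmax(-eta * (Gy + gy_prev))
        gx = A @ y        # loss of the x-player (minimizer)
        gy = -A.T @ x     # loss of the y-player (maximizer)
        Gx += gx
        Gy += gy
        gx_prev, gy_prev = gx, gy
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T
```

The optimistic prediction term is what distinguishes OFTRL from plain FTRL: when both players follow the dynamics, consecutive loss vectors change slowly, the prediction error stays small, and the individual regrets remain constant in $T$, which yields the $O(1/T)$ convergence of the averaged strategies.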
