Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games

Abstract

This work studies an independent natural policy gradient (NPG) algorithm for the multi-agent reinforcement learning problem in Markov potential games. It is shown that, under mild technical assumptions and the introduction of the suboptimality gap, the independent NPG method with an oracle providing exact policy evaluation reaches an ϵ-Nash Equilibrium (NE) within O(1/ϵ) iterations. This improves upon the previous best result of O(1/ϵ²) iterations and matches the O(1/ϵ) rate achievable in the single-agent case. Empirical results for a synthetic potential game and a congestion game are presented to verify the theoretical bounds.
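As a rough illustration (not taken from the paper), the sketch below shows one iteration of an independent NPG update under softmax policy parameterization, where NPG reduces to a multiplicative-weights update on each agent's own Q-values. The function name `independent_npg_step` and the interface are hypothetical; the exact Q-values are assumed to come from the exact policy-evaluation oracle mentioned in the abstract.

```python
import numpy as np

def independent_npg_step(policies, q_values, eta):
    """One independent NPG iteration for all agents (illustrative sketch).

    Under softmax parameterization, the NPG update has the closed form
    pi'(a|s) ∝ pi(a|s) * exp(eta * Q_i(s, a)), applied by each agent i
    independently while the other agents' policies are held fixed.

    policies : list of arrays, each (num_states, num_actions_i),
               rows are strictly positive probability distributions
    q_values : list of arrays, each (num_states, num_actions_i),
               the agent's exact Q-values (assumed oracle output)
    eta      : step size
    """
    new_policies = []
    for pi, q in zip(policies, q_values):
        # Multiplicative-weights update in log space for numerical stability.
        logits = np.log(pi) + eta * q
        logits -= logits.max(axis=1, keepdims=True)
        new_pi = np.exp(logits)
        new_pi /= new_pi.sum(axis=1, keepdims=True)  # renormalize rows
        new_policies.append(new_pi)
    return new_policies
```

In this sketch each agent updates using only its own Q-function, treating the other agents as part of the environment, which is what makes the method "independent" (no communication or centralized coordination during learning).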
