Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games

Abstract

This paper makes progress towards learning Nash equilibria in two-player zero-sum Markov games from offline data. Specifically, consider a $\gamma$-discounted infinite-horizon Markov game with $S$ states, where the max-player has $A$ actions and the min-player has $B$ actions. We propose a pessimistic model-based algorithm with Bernstein-style lower confidence bounds -- called VI-LCB-Game -- that provably finds an $\varepsilon$-approximate Nash equilibrium with a sample complexity no larger than $\frac{C_{\mathsf{clipped}}^{\star} S(A+B)}{(1-\gamma)^{3}\varepsilon^{2}}$ (up to some log factor). Here, $C_{\mathsf{clipped}}^{\star}$ is a unilateral clipped concentrability coefficient that reflects the coverage and distribution shift of the available data (vis-à-vis the target data), and the target accuracy $\varepsilon$ can be any value within $\big(0, \frac{1}{1-\gamma}\big]$. Our sample complexity bound strengthens prior art by a factor of $\min\{A, B\}$, achieving minimax optimality for the entire $\varepsilon$-range. An appealing feature of our result lies in its algorithmic simplicity, which shows that neither variance reduction nor sample splitting is necessary for achieving sample optimality.
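To make the Bernstein-style pessimism concrete, below is a minimal, illustrative sketch of a lower-confidence-bound Bellman backup for a single state-action pair. This is not the paper's VI-LCB-Game algorithm; the penalty constants, the confidence parameter `delta`, and the reward normalization to $[0, 1]$ are assumptions chosen for illustration. The penalty combines an empirical-variance term (the Bernstein part) with a lower-order $1/n$ term, and the resulting value is clipped to the feasible range $[0, \frac{1}{1-\gamma}]$.

```python
import numpy as np

def bernstein_penalty(p_hat, v, n, gamma=0.9, delta=0.05):
    """Bernstein-style confidence penalty for one state-action pair.

    p_hat : empirical next-state distribution (length-S array)
    v     : current value estimate (length-S array)
    n     : number of samples observed for this pair
    """
    log_term = np.log(2.0 / delta)
    # Empirical variance of v under the estimated transition distribution.
    var = p_hat @ (v ** 2) - (p_hat @ v) ** 2
    # Variance-aware leading term plus a lower-order 1/n correction.
    return np.sqrt(2.0 * var * log_term / n) + log_term / ((1.0 - gamma) * n)

def pessimistic_backup(r, p_hat, v, n, gamma=0.9):
    """Lower-confidence-bound Bellman backup, clipped to the valid value range."""
    q_lcb = r + gamma * (p_hat @ v) - bernstein_penalty(p_hat, v, n, gamma)
    return float(np.clip(q_lcb, 0.0, 1.0 / (1.0 - gamma)))
```

Subtracting the penalty (rather than adding it) is what makes the estimate pessimistic: poorly covered state-action pairs, with small `n` or high variance, are penalized more heavily, discouraging the learned policies from relying on them.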
