
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs

Abstract

We prove new upper and lower bounds for the sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix}\,\epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs of possible further utility.
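
One standard way to instantiate such a connection is sketched below; this is a rough sketch under assumptions, not necessarily the paper's exact argument. The constant $C$, the uniform value-approximation bound, and the $\widetilde{O}((1-\gamma)^{-3}\delta^{-2})$ sample complexity for generative-model discounted solvers are taken from the broader literature rather than from this abstract.

```latex
% Illustrative sketch only: the constant C, the uniform value-approximation
% bound, and the discounted-solver sample complexity are assumptions drawn
% from the standard literature, not statements quoted from this paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

For every stationary policy $\pi$, with average reward $\bar v^\pi$ and
$\gamma$-discounted value $V^\pi_\gamma$, uniform mixing gives (assumed bound)
\[
  \bigl|(1-\gamma)\,V^{\pi}_{\gamma}(s) - \bar v^{\pi}\bigr|
  \;\le\; C\,(1-\gamma)\,t_{\mathrm{mix}}
  \qquad \text{for all states } s .
\]
Choosing $1-\gamma = \Theta(\epsilon / t_{\mathrm{mix}})$ makes this error
$O(\epsilon)$, so an $\epsilon/(1-\gamma)$-optimal policy for the discounted
MDP is $O(\epsilon)$-optimal for the average-reward MDP.  A generative-model
discounted solver needing $\widetilde{O}\bigl((1-\gamma)^{-3}\delta^{-2}\bigr)$
samples per state-action pair for accuracy $\delta = \epsilon/(1-\gamma)$ then
uses
\[
  \widetilde{O}\!\left(\frac{1}{(1-\gamma)\,\epsilon^{2}}\right)
  \;=\;
  \widetilde{O}\!\left(\frac{t_{\mathrm{mix}}}{\epsilon^{3}}\right)
\]
samples per state-action pair, matching the rate stated above.

\end{document}
```

Under these assumptions the mixing time enters only through the choice of $1-\gamma$, which is why the resulting rate is linear in $t_\mathrm{mix}$.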
