Efficiently Solving MDPs with Stochastic Mirror Descent

We present a unified framework based on primal-dual stochastic mirror descent for approximately solving infinite-horizon Markov decision processes (MDPs) given a generative model. When applied to an average-reward MDP with $\mathcal{A}_{\mathrm{tot}}$ total state-action pairs and mixing time bound $t_{\mathrm{mix}}$, our method computes an $\epsilon$-optimal policy with an expected $\widetilde{O}(t_{\mathrm{mix}}^2 \mathcal{A}_{\mathrm{tot}} \epsilon^{-2})$ samples from the state-transition matrix, removing the ergodicity dependence of prior art. When applied to a $\gamma$-discounted MDP with $\mathcal{A}_{\mathrm{tot}}$ total state-action pairs, our method computes an $\epsilon$-optimal policy with an expected $\widetilde{O}((1-\gamma)^{-4} \mathcal{A}_{\mathrm{tot}} \epsilon^{-2})$ samples, matching the previous state-of-the-art up to a $(1-\gamma)^{-1}$ factor. Both methods are model-free, update state values and policies simultaneously, and run in time linear in the number of samples taken. We achieve these results through a more general stochastic mirror descent framework for solving bilinear saddle-point problems with simplex and box domains, and we demonstrate the flexibility of this framework by providing further applications to constrained MDPs.
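To make the saddle-point framework concrete, below is a minimal sketch of primal-dual stochastic mirror descent on a generic bilinear objective with a simplex block (entropic updates) and a box block (Euclidean projected updates), with single-sample gradient estimates standing in for generative-model access. The objective form, variable names, step sizes, and sampling scheme are illustrative assumptions for this sketch; they are not the paper's exact algorithm, which additionally tailors the estimators and step sizes to the MDP structure.

```python
import numpy as np

def smd_bilinear_saddle(A, b, c, T=20000, eta_v=0.01, eta_mu=0.01, box_radius=1.0, seed=0):
    """Sketch of primal-dual stochastic mirror descent for
        min_{v in [-R, R]^n}  max_{mu in simplex_m}  mu^T (A v + b) - c^T v.
    (Illustrative problem form: mu plays the role of a state-action occupancy
    measure on a simplex, v the role of bounded state values in a box.)
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    mu = np.full(m, 1.0 / m)          # simplex iterate (uniform start)
    v = np.zeros(n)                   # box iterate
    mu_avg, v_avg = np.zeros(m), np.zeros(n)

    for _ in range(T):
        # Unbiased gradient w.r.t. v: E_{i ~ mu}[A[i, :]] - c = A^T mu - c.
        i = rng.choice(m, p=mu)
        g_v = A[i, :] - c

        # Unbiased gradient w.r.t. mu: E_{j ~ Unif}[n * A[:, j] * v[j]] + b = A v + b.
        j = rng.integers(n)
        g_mu = n * A[:, j] * v[j] + b

        # Descent step on the box variable: Euclidean step, then project (clip) to the box.
        v = np.clip(v - eta_v * g_v, -box_radius, box_radius)

        # Ascent step on the simplex variable: entropic mirror map (multiplicative weights).
        mu = mu * np.exp(eta_mu * g_mu)
        mu /= mu.sum()

        # Average the iterates; the averaged pair approximates the saddle point.
        v_avg += v / T
        mu_avg += mu / T

    return v_avg, mu_avg

# Illustrative usage on random data (not an MDP instance).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    c = rng.standard_normal(20)
    v_hat, mu_hat = smd_bilinear_saddle(A, b, c)
```

The two mirror maps match the two domains: negative entropy keeps the simplex iterate a probability distribution via multiplicative updates, while the squared Euclidean norm with clipping keeps the value iterate inside its box.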