Regret Bounds for Reinforcement Learning via Markov Chain Concentration

Abstract

We give a simple optimistic algorithm for which it is easy to derive regret bounds of $\tilde{O}(\sqrt{t_{\rm mix} SAT})$ after $T$ steps in uniformly ergodic Markov decision processes with $S$ states, $A$ actions, and mixing time parameter $t_{\rm mix}$. These bounds are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could only be improved by using an alternative mixing time parameter.
