
Near-optimal Optimistic Reinforcement Learning using Empirical Bernstein Inequalities

Abstract

We study model-based reinforcement learning in an unknown finite communicating Markov decision process. We propose a simple algorithm that leverages a variance-based confidence interval. We show that the proposed algorithm, UCRL-V, achieves the optimal regret $\tilde{\mathcal{O}}(\sqrt{DSAT})$ up to logarithmic factors, and so our work closes a gap with the lower bound without additional assumptions on the MDP. We perform experiments in a variety of environments that validate the theoretical bounds and demonstrate that UCRL-V performs better than the state-of-the-art algorithms.
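
The abstract refers to variance-based confidence intervals built from empirical Bernstein inequalities. As a rough illustration only, the Python sketch below computes a Maurer–Pontil-style empirical Bernstein confidence radius for bounded samples; the function name, constants, and usage are assumptions for illustration and are not the paper's exact confidence sets or algorithm.

```python
import numpy as np

def empirical_bernstein_radius(samples, delta):
    """Empirical Bernstein confidence radius for i.i.d. samples in [0, 1]:
    with probability at least 1 - delta, the true mean lies within `radius`
    of the sample mean. Constants follow the Maurer & Pontil (2009) bound
    (an assumption; not necessarily the exact constants used by UCRL-V)."""
    n = len(samples)
    var = np.var(samples, ddof=1)               # unbiased sample variance
    log_term = np.log(2.0 / delta)
    return np.sqrt(2.0 * var * log_term / n) + 7.0 * log_term / (3.0 * (n - 1))

# Example: an optimistic upper bound on an unknown Bernoulli mean
# (e.g. a transition probability or bounded reward), as in optimism-based RL.
rng = np.random.default_rng(0)
samples = rng.binomial(1, 0.3, size=100).astype(float)
upper_bound = samples.mean() + empirical_bernstein_radius(samples, delta=0.05)
```

Because the radius shrinks with the empirical variance rather than only with the sample count, such bounds are tighter than Hoeffding-style intervals for low-variance quantities, which is the intuition behind variance-based optimistic exploration.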
