
Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP

Abstract

A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. \cite{jin2018q} proposed a Q-learning algorithm with a UCB exploration policy and proved that it achieves a nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt Q-learning with a UCB exploration bonus to infinite-horizon MDPs with discounted rewards \emph{without} accessing a generative model. We show that the \textit{sample complexity of exploration} of our algorithm is bounded by $\tilde{O}\left(\frac{SA}{\epsilon^2(1-\gamma)^7}\right)$. This improves the previously best known result of $\tilde{O}\left(\frac{SA}{\epsilon^4(1-\gamma)^8}\right)$ in this setting, achieved by delayed Q-learning \cite{strehl2006pac}, and matches the lower bound in terms of $\epsilon$, $S$, and $A$ up to logarithmic factors.
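To make the setting concrete, the following is a minimal sketch of tabular Q-learning with a UCB-style exploration bonus for a discounted, infinite-horizon MDP. It is not the paper's exact algorithm: the `env` interface, the learning-rate schedule `alpha`, and the bonus constant `c` are illustrative assumptions standing in for the tuned choices analyzed in the paper.

```python
import numpy as np

def q_learning_ucb(env, S, A, gamma=0.99, total_steps=100_000, c=1.0, delta=0.05):
    """Illustrative sketch of Q-learning with a UCB exploration bonus in a
    discounted MDP. Assumes env.reset() -> state and env.step(a) -> (state, reward),
    with rewards in [0, 1]; constants and schedules are simplified stand-ins."""
    Vmax = 1.0 / (1.0 - gamma)                # upper bound on the value function
    Q = np.full((S, A), Vmax)                 # optimistic initialization
    N = np.zeros((S, A), dtype=np.int64)      # per state-action visit counts

    s = env.reset()
    for _ in range(total_steps):
        a = int(np.argmax(Q[s]))              # act greedily w.r.t. optimistic Q
        s_next, r = env.step(a)

        N[s, a] += 1
        k = N[s, a]
        alpha = (Vmax + 1.0) / (Vmax + k)     # step size of the form H/(H + k)
        bonus = c * Vmax * np.sqrt(np.log(S * A * k / delta) / k)  # UCB bonus

        target = r + bonus + gamma * Q[s_next].max()
        Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * min(target, Vmax)
        s = s_next
    return Q
```

The optimistic initialization and the count-based bonus together keep the estimates upper bounds on the optimal Q-values with high probability, which is what drives exploration without a generative model.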
