Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP

Abstract
A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. \cite{jin2018q} proposed a Q-learning algorithm with a UCB exploration policy, and proved that it achieves a nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt Q-learning with a UCB-exploration bonus to infinite-horizon MDPs with discounted rewards \emph{without} accessing a generative model. We show that the \textit{sample complexity of exploration} of our algorithm is bounded by $\tilde{O}\left(\frac{SA}{\epsilon^2(1-\gamma)^7}\right)$. This improves the previously best known result of $\tilde{O}\left(\frac{SA}{\epsilon^4(1-\gamma)^8}\right)$ in this setting, achieved by delayed Q-learning \cite{strehl2006pac}, and matches the lower bound in terms of $\epsilon$ as well as $S$ and $A$, up to logarithmic factors.
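At a high level, the algorithm maintains an optimistically initialized tabular Q-function, acts greedily with respect to it, and folds a count-based UCB bonus into every update. The sketch below is a minimal Python illustration of this general recipe, not the paper's exact procedure: the environment interface (`env.nS`, `env.nA`, `env.reset()`, `env.step(a)` returning `(s', r)` with rewards in $[0,1]$), the bonus constant `c`, and the precise form of the horizon-like parameter `H` and learning rate are assumptions chosen for concreteness.

```python
import numpy as np

def q_learning_ucb(env, gamma=0.99, eps=0.1, c=1.0, num_steps=100_000):
    """Tabular Q-learning with a UCB-style exploration bonus for an
    infinite-horizon discounted MDP (illustrative sketch; constants and
    the exact bonus form are assumptions, not the paper's tuned values)."""
    vmax = 1.0 / (1.0 - gamma)                  # upper bound on any value
    # Effective horizon scale; one common choice is ~ log(1/((1-gamma)eps)) / (1-gamma).
    H = int(np.ceil(np.log(1.0 / ((1.0 - gamma) * eps)) / (1.0 - gamma)))
    Q = np.full((env.nS, env.nA), vmax)         # optimistic initialization
    N = np.zeros((env.nS, env.nA), dtype=int)   # visit counts per (s, a)

    s = env.reset()
    for _ in range(num_steps):
        a = int(np.argmax(Q[s]))                # act greedily w.r.t. optimistic Q
        s_next, r = env.step(a)
        N[s, a] += 1
        k = N[s, a]
        alpha = (H + 1) / (H + k)               # count-dependent learning rate
        bonus = c * np.sqrt(H / k)              # UCB exploration bonus, shrinks with visits
        v_next = min(vmax, Q[s_next].max())     # clipped optimistic value estimate
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + bonus + gamma * v_next)
        s = s_next
    return Q
```

The bonus $\propto \sqrt{H/k}$ keeps the Q-estimates optimistic, so under-visited state-action pairs look attractive and get explored; as the visit count $k$ grows, the bonus vanishes and the update reduces to standard Q-learning.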