UCB Exploration via Q-Ensembles

Abstract
We show how an ensemble of Q-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well-established algorithms from the bandit setting and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
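A minimal sketch of the kind of action selection the abstract describes, assuming the common UCB-over-ensemble form: the ensemble's mean Q-value serves as the exploitation term and its standard deviation as an uncertainty bonus. The function name, the exploration coefficient `lam`, and the array shapes are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def ucb_action(q_values: np.ndarray, lam: float = 1.0) -> int:
    """Pick the action with the highest upper-confidence bound.

    q_values: array of shape (ensemble_size, num_actions) holding each
        ensemble member's Q-estimates for the current state.
    lam: coefficient scaling the ensemble's disagreement (uncertainty bonus).
    """
    mean = q_values.mean(axis=0)   # empirical mean over ensemble members
    std = q_values.std(axis=0)     # ensemble disagreement as an uncertainty proxy
    return int(np.argmax(mean + lam * std))

# Example: 5 ensemble members, 3 actions.
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 3))
print(ucb_action(q, lam=1.0))
```

Actions where the ensemble members disagree receive a larger bonus, so the agent is steered toward states whose values are still uncertain, mirroring how UCB trades off exploration and exploitation in the bandit setting.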