
UCB and InfoGain Exploration via Q-Ensembles

Abstract

We show how an ensemble of Q^*-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well-established algorithms from the bandit setting and adapt them to the Q-learning setting. First, we propose an exploration strategy based on upper-confidence bounds (UCB). Next, we define an "InfoGain" exploration bonus, which depends on the disagreement of the Q-ensemble. Our experiments show significant gains on the Atari benchmark.
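The UCB strategy described above can be sketched in a few lines: each ensemble head supplies a Q-value estimate for the current state, and the agent acts greedily with respect to the empirical mean plus a multiple of the ensemble's standard deviation, which serves as the disagreement-driven exploration bonus. The sketch below is illustrative only; the function name `ucb_action` and the coefficient `lam` are hypothetical, not taken from the paper.

```python
import numpy as np

def ucb_action(q_values, lam=1.0):
    """Select the action maximizing mean + lam * std over the Q-ensemble.

    q_values: array of shape (ensemble_size, num_actions), one row of
    Q-value estimates per ensemble head for the current state.
    lam: exploration coefficient (hypothetical name) scaling the bonus.
    """
    mu = q_values.mean(axis=0)     # empirical mean across ensemble heads
    sigma = q_values.std(axis=0)   # ensemble disagreement = exploration bonus
    return int(np.argmax(mu + lam * sigma))

# Toy usage: 5 ensemble heads, 3 actions, random Q-value estimates.
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 3))
a = ucb_action(q, lam=1.0)
assert 0 <= a < 3
```

With `lam=0` this reduces to greedy action selection on the ensemble mean; larger `lam` shifts the agent toward actions the ensemble members disagree on.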
