Adaptive Multi-Goal Exploration

Abstract

We introduce a generic strategy for provably efficient multi-goal exploration. It relies on AdaGoal, a novel goal selection scheme that leverages a measure of uncertainty in reaching states to adaptively target goals that are neither too difficult nor too easy. We show how AdaGoal can be used to tackle the objective of learning an $\epsilon$-optimal goal-conditioned policy for the (initially unknown) set of goal states that are reachable within $L$ steps in expectation from a reference state $s_0$ in a reward-free Markov decision process. In the tabular case with $S$ states and $A$ actions, our algorithm requires $\tilde{O}(L^3 S A \epsilon^{-2})$ exploration steps, which is nearly minimax optimal. We also readily instantiate AdaGoal in linear mixture Markov decision processes, yielding the first goal-oriented PAC guarantee with linear function approximation. Beyond its strong theoretical guarantees, we anchor AdaGoal in goal-conditioned deep reinforcement learning, both conceptually and empirically, by connecting its idea of selecting "uncertain" goals to maximizing value ensemble disagreement.
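The deep-RL connection mentioned above, selecting goals where a value ensemble disagrees most, can be illustrated with a minimal sketch. This is not the paper's algorithm; the ensemble values, the disagreement measure (standard deviation of ensemble predictions), and the reachability filter below are assumptions made for this example.

```python
import numpy as np

# Minimal sketch of AdaGoal-style goal selection via value-ensemble
# disagreement. Hypothetical setup: value_ensemble[i, g] is member i's
# estimate of the goal-conditioned value V(s0, g) in [0, 1] (e.g., a
# proxy for the probability of reaching goal g from s0); shapes and
# names are illustrative assumptions, not the paper's implementation.

rng = np.random.default_rng(0)
num_goals, ensemble_size = 50, 5
value_ensemble = rng.uniform(0.0, 1.0, size=(ensemble_size, num_goals))

def select_goal(value_ensemble, reachability_threshold=0.1):
    """Pick the goal with maximal ensemble disagreement among goals
    the ensemble does not collectively deem unreachable."""
    mean_value = value_ensemble.mean(axis=0)
    disagreement = value_ensemble.std(axis=0)
    # Filter out goals that look too difficult: uniformly low value
    # estimates across all ensemble members.
    candidates = mean_value >= reachability_threshold
    # Goals the ensemble agrees on (too easy, or clearly unreachable)
    # have low disagreement and are naturally deprioritized.
    scores = np.where(candidates, disagreement, -np.inf)
    return int(np.argmax(scores))

print(f"selected goal index: {select_goal(value_ensemble)}")
```

The disagreement score plays the role of the abstract's uncertainty measure: goals that are trivially reachable or clearly out of reach yield consistent ensemble predictions, so the argmax lands on goals of intermediate, informative difficulty.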
