
Safe Exploration via Policy Priors

Manuel Wendl
Yarden As
Manish Prajapat
Anton Pollak
Stelian Coros
Andreas Krause
Main: 10 pages · Appendix: 25 pages · Bibliography: 6 pages · 19 figures · 2 tables
Abstract

Safe exploration is a key requirement for reinforcement learning (RL) agents to learn and adapt online, beyond controlled (e.g., simulated) environments. In this work, we tackle this challenge by utilizing suboptimal yet conservative policies (e.g., obtained from offline data or simulators) as priors. Our approach, SOOPER, uses probabilistic dynamics models to explore optimistically, yet fall back pessimistically to the conservative policy prior when needed. We prove that SOOPER guarantees safety throughout learning, and establish convergence to an optimal policy by bounding its cumulative regret. Extensive experiments on key safe RL benchmarks and real-world hardware demonstrate that SOOPER is scalable and outperforms the state of the art, and validate our theoretical guarantees in practice.
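
To make the optimistic-exploration / pessimistic-fallback idea concrete, the following is a minimal, self-contained sketch and not the paper's algorithm: the ensemble dynamics model, the box safety constraint, the candidate-action scoring, and all names (EnsembleDynamics, select_action, is_safe, prior_policy, beta) are illustrative assumptions, intended only to show optimistic scoring under model uncertainty combined with a pessimistic safety check and a fallback to a conservative policy prior.

```python
import numpy as np


def prior_policy(state):
    # Hypothetical conservative policy prior: gently damp the state toward the origin.
    return -0.1 * state


class EnsembleDynamics:
    """Toy probabilistic dynamics model: an ensemble of linear models whose
    disagreement serves as an epistemic-uncertainty estimate."""

    def __init__(self, n_models=5, state_dim=2, action_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.A = [np.eye(state_dim) + 0.01 * rng.standard_normal((state_dim, state_dim))
                  for _ in range(n_models)]
        self.B = [0.1 * np.eye(state_dim)[:, :action_dim]
                  + 0.01 * rng.standard_normal((state_dim, action_dim))
                  for _ in range(n_models)]

    def predict(self, state, action):
        # Mean prediction and per-dimension ensemble disagreement (epistemic std).
        preds = np.stack([A @ state + B @ action for A, B in zip(self.A, self.B)])
        return preds.mean(axis=0), preds.std(axis=0)


def is_safe(state, margin=0.0):
    # Hypothetical safety constraint: stay inside a box of half-width 1.
    return np.all(np.abs(state) <= 1.0 - margin)


def select_action(state, model, candidate_actions, beta=2.0):
    """Score candidate actions optimistically (uncertainty bonus), certify them
    pessimistically (worst-case next state must stay safe), and fall back to the
    conservative prior if no candidate can be certified."""
    best_action, best_value = None, -np.inf
    for action in candidate_actions:
        mean_next, std_next = model.predict(state, action)
        # Optimistic value: nominal objective plus an exploration bonus.
        optimistic_value = -np.linalg.norm(mean_next) + beta * std_next.sum()
        # Pessimistic next state: push each dimension toward the box boundary.
        pessimistic_next = mean_next + beta * std_next * np.sign(mean_next)
        if is_safe(pessimistic_next) and optimistic_value > best_value:
            best_action, best_value = action, optimistic_value
    if best_action is None:
        # No candidate could be certified safe: fall back to the policy prior.
        return prior_policy(state)
    return best_action


if __name__ == "__main__":
    model = EnsembleDynamics()
    state = np.array([0.5, -0.3])
    rng = np.random.default_rng(1)
    candidates = [rng.uniform(-1, 1, size=2) for _ in range(16)]
    print("chosen action:", select_action(state, model, candidates))
```

The key design choice illustrated here is the asymmetry between objective and constraint: the objective is evaluated under an optimistic bound to drive exploration, while safety is certified under a pessimistic bound, with the conservative prior as a guaranteed-safe fallback whenever no action passes the check.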
