Safe reinforcement learning is extremely challenging: not only must the agent explore an unknown environment, it must also do so without ever violating the safety constraints. The problem is typically posed as a constrained Markov decision process (CMDP) with an unknown model, often with the learning agent having access to a safe but suboptimal baseline policy. Recent results obtain an objective regret that grows sublinearly in the number of learning episodes for a finite-state CMDP, while being safe at all times. Their main idea is to combine a reward bonus for exploration (optimism) with a conservative constraint (pessimism). However, the approach is so pessimistic that, empirically, the learning process is inordinately long and keeps applying the safe baseline policy for long stretches. Our key insight is that such excessive pessimism hinders exploration and must be counteracted by optimism with respect to the model. This insight yields DOPE, which applies a double dose of optimism, with respect to both the model and the reward, while remaining pessimistic with respect to the constraints. We show that DOPE attains a tighter objective regret bound with no constraint violations. Furthermore, empirical studies show that DOPE yields a dramatic performance improvement over earlier approaches.
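To make the optimism/pessimism structure concrete, the sketch below shows one way such a planning step could look in a tiny tabular finite-horizon CMDP. It is a hypothetical illustration, not the paper's algorithm: it adds an exploration bonus to rewards (optimism) and the same bonus to costs (a pessimistically tightened constraint), plans with an occupancy-measure linear program on the empirical model, and falls back to the safe baseline policy if the tightened problem is infeasible. DOPE's additional optimism over the transition model (planning over a confidence set, e.g. via an extended LP) is omitted, and the bonus shape, budget, and all function names are assumptions made for this example.

```python
"""Schematic optimism/pessimism planning step for safe RL in a tabular
finite-horizon CMDP; an illustrative sketch, not the DOPE implementation."""

import numpy as np
from scipy.optimize import linprog

S, A, H = 3, 2, 4  # tiny finite-horizon CMDP for illustration


def plan_occupancy_lp(P, r, c, budget, init_dist):
    """Maximise expected reward s.t. expected cumulative cost <= budget.

    Variables are occupancy measures q[h, s, a] >= 0; returns q, or None
    if the (tightened) problem is infeasible.
    """
    n = H * S * A

    def idx(h, s, a):
        return (h * S + s) * A + a

    # linprog minimises, so negate the reward in the objective.
    obj = np.zeros(n)
    for h in range(H):
        for s in range(S):
            for a in range(A):
                obj[idx(h, s, a)] = -r[s, a]

    # Equality constraints: initial occupancy and flow conservation.
    A_eq, b_eq = [], []
    for s in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(0, s, a)] = 1.0
        A_eq.append(row)
        b_eq.append(init_dist[s])
    for h in range(H - 1):
        for s2 in range(S):
            row = np.zeros(n)
            for a in range(A):
                row[idx(h + 1, s2, a)] = 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(h, s, a)] -= P[s, a, s2]
            A_eq.append(row)
            b_eq.append(0.0)

    # Single inequality: expected cumulative (pessimistic) cost <= budget.
    A_ub = np.zeros((1, n))
    for h in range(H):
        for s in range(S):
            for a in range(A):
                A_ub[0, idx(h, s, a)] = c[s, a]

    res = linprog(obj, A_ub=A_ub, b_ub=[budget], A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(H, S, A) if res.success else None


def plan_episode(P_hat, r_hat, c_hat, counts, budget, init_dist, baseline):
    """One planning step: optimistic rewards, pessimistic (tightened) costs."""
    bonus = 1.0 / np.sqrt(np.maximum(counts, 1.0))  # hypothetical bonus shape
    q = plan_occupancy_lp(P_hat, r_hat + bonus, c_hat + bonus,
                          budget, init_dist)
    if q is None:
        return baseline, "baseline"  # tightened problem infeasible: stay safe
    policy = q / np.maximum(q.sum(axis=2, keepdims=True), 1e-12)
    return policy, "planned"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P_hat = rng.dirichlet(np.ones(S), size=(S, A))   # empirical transitions
    r_hat = rng.uniform(size=(S, A))                 # empirical rewards
    c_hat = rng.uniform(size=(S, A))                 # empirical costs
    counts = rng.integers(1, 50, size=(S, A)).astype(float)
    baseline = np.ones((H, S, A)) / A                # stand-in safe policy
    policy, mode = plan_episode(P_hat, r_hat, c_hat, counts,
                                budget=6.0, init_dist=np.ones(S) / S,
                                baseline=baseline)
    print(mode, policy.shape)
```

The excessive-pessimism failure mode described above corresponds to the infeasible branch firing episode after episode, so the agent keeps replaying the baseline policy; optimism with respect to the model enlarges the set of plans considered feasible and breaks that loop.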