Synthesizing Safe Policies under Probabilistic Constraints with Reinforcement Learning and Bayesian Model Checking

Abstract
We propose to leverage a reinforcement learner's epistemic uncertainty about constraint satisfaction in safety-critical domains. We introduce a framework for specifying requirements for reinforcement learners in constrained settings, including confidence levels for the results. We show that an agent's confidence in constraint satisfaction provides a useful signal for balancing optimization and safety during learning.
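The core quantity the abstract refers to, an agent's confidence that it satisfies a probabilistic constraint, can be illustrated with a minimal sketch. This is not the paper's method, only an assumed Beta-Bernoulli model: episodes are treated as Bernoulli trials (constraint satisfied or violated), and confidence is the posterior probability that the true satisfaction rate meets a required threshold. The function name `constraint_confidence` and all parameters are hypothetical.

```python
import random

def constraint_confidence(successes, failures, threshold=0.95,
                          n_samples=20000, seed=0):
    """Estimate P(p >= threshold) under a Beta(1+successes, 1+failures)
    posterior over the constraint-satisfaction probability p.

    Uses a uniform Beta(1, 1) prior and Monte Carlo sampling from the
    posterior (random.betavariate is in the standard library).
    """
    rng = random.Random(seed)
    alpha, beta = 1 + successes, 1 + failures
    hits = sum(rng.betavariate(alpha, beta) >= threshold
               for _ in range(n_samples))
    return hits / n_samples

# Example: after 990 safe episodes and 10 violations, how confident
# are we that the true satisfaction rate is at least 0.95?
conf = constraint_confidence(successes=990, failures=10, threshold=0.95)
```

A learner could use such a confidence value as a gate: keep optimizing the reward objective while confidence stays above a required level, and prioritize safety otherwise.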