Provably Safe Reinforcement Learning with Step-wise Violation Constraints

In this paper, we investigate a novel safe reinforcement learning problem with step-wise violation constraints. Our setting differs from existing work in that we impose stricter step-wise violation constraints and do not assume the existence of safe actions, which makes our formulation better suited to safety-critical applications, such as robot control and autonomous driving, that must ensure safety at every decision step and may not always have a safe action available. We propose a novel algorithm, SUCBVI, which comes with provable guarantees on both the step-wise violation and the regret. We also provide lower bounds that validate the optimality of both the violation and the regret guarantees with respect to the key problem parameters. Moreover, we study a new safe reward-free exploration problem with step-wise violation constraints. For this problem, we design an (ε, δ)-PAC algorithm, SRF-UCRL, which achieves nearly state-of-the-art sample complexity and keeps the step-wise violation bounded throughout exploration. Experimental results demonstrate the superior safety performance of our algorithms and corroborate our theoretical findings.
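The abstract does not detail how SUCBVI operates; for intuition only, below is a minimal sketch of an optimistic value-iteration planner with a step-wise safety screen, in the general spirit the name suggests. Everything in it is an assumption for illustration: the function name, the empirical cost estimates `cost_hat`, the bonus form, the `cost_budget` threshold, and the fallback rule are not taken from the paper and do not reproduce the actual SUCBVI algorithm.

```python
import numpy as np

def optimistic_safe_planning(trans_counts, reward_hat, cost_hat,
                             H, cost_budget, delta=0.05, bonus_scale=1.0):
    """Illustrative sketch only: optimistic value iteration with a step-wise
    safety screen. Not the paper's SUCBVI algorithm.

    trans_counts : (S, A, S) empirical transition counts
    reward_hat   : (S, A) empirical mean rewards in [0, 1]
    cost_hat     : (S, A) empirical mean step-wise safety costs in [0, 1]
    cost_budget  : assumed per-step violation threshold used to screen actions
    """
    S, A, _ = trans_counts.shape
    n_visits = np.maximum(trans_counts.sum(axis=-1), 1)      # (S, A) visit counts
    p_hat = trans_counts / n_visits[..., None]                # empirical transitions
    bonus = bonus_scale * np.sqrt(np.log(2 * S * A * H / delta) / n_visits)

    v = np.zeros((H + 1, S))
    policy = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        # optimistic Q-values: empirical reward + exploration bonus + expected future value
        q = np.clip(reward_hat + bonus + p_hat @ v[h + 1], 0.0, H - h)
        # pessimistic (upper-confidence) cost estimate used for the safety screen
        cost_ucb = cost_hat + bonus
        safe_mask = cost_ucb <= cost_budget                   # plausibly safe (s, a) pairs
        q_screened = np.where(safe_mask, q, -np.inf)
        # states with no plausibly safe action fall back to the unscreened optimum,
        # since the setting does not assume a safe action always exists
        has_safe = safe_mask.any(axis=1)
        policy[h] = np.where(has_safe, q_screened.argmax(axis=1), q.argmax(axis=1))
        v[h] = q[np.arange(S), policy[h]]
    return policy, v
```

The sketch pairs optimism for the reward with pessimism (an upper confidence bound) for the safety cost, and falls back to the unscreened maximizer when no plausibly safe action remains, mirroring the abstract's point that a safe action may not always exist.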