
Provably Safe Reinforcement Learning with Step-wise Violation Constraints

Abstract

In this paper, we investigate a novel safe reinforcement learning problem with step-wise violation constraints. Our problem differs from existing work in that we consider stricter step-wise violation constraints and do not assume the existence of safe actions, which makes our formulation more suitable for safety-critical applications that must ensure safety at every decision step and may not always possess safe actions, e.g., robot control and autonomous driving. We propose a novel algorithm, SUCBVI, which guarantees $\widetilde{O}(\sqrt{ST})$ step-wise violation and $\widetilde{O}(\sqrt{H^3SAT})$ regret. We also provide lower bounds establishing that both the violation and the regret bounds are optimal with respect to $S$ and $T$. Moreover, we study a novel safe reward-free exploration problem with step-wise violation constraints. For this problem, we design an $(\varepsilon,\delta)$-PAC algorithm, SRF-UCRL, which achieves nearly state-of-the-art sample complexity $\widetilde{O}\big((\frac{S^2AH^2}{\varepsilon}+\frac{H^4SA}{\varepsilon^2})(\log\frac{1}{\delta}+S)\big)$ and guarantees $\widetilde{O}(\sqrt{ST})$ violation during exploration. Experimental results demonstrate the superiority of our algorithms in safety performance and corroborate our theoretical results.
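To make the step-wise violation metric concrete, below is a minimal illustrative sketch, not the authors' SUCBVI pseudocode: an optimistic value-iteration-style loop over a tabular episodic MDP that counts a violation at every step the agent visits an unsafe state, rather than once per episode. The environment `step` function, the unsafe-state indicator, and all problem sizes are hypothetical stand-ins introduced only for this example.

```python
import numpy as np

# Assumed (hypothetical) problem sizes: states, actions, horizon, episodes.
S, A, H, K = 10, 4, 5, 100
rng = np.random.default_rng(0)

counts = np.ones((S, A))        # visit counts (init 1 to avoid division by zero)
reward_sums = np.zeros((S, A))  # running sums of observed rewards

def step(s, a):
    """Hypothetical environment: returns next state, reward, and a binary
    step-wise violation signal (1 iff the visited state is unsafe)."""
    s_next = int(rng.integers(S))
    return s_next, float(rng.random()), int(s_next == S - 1)

total_violation = 0
for k in range(K):
    # Optimistic Q-values: empirical mean reward plus a UCB-style bonus,
    # analogous in spirit to UCBVI-type exploration bonuses. This sketch
    # omits transition estimation and the safety-aware action selection
    # that the paper's algorithm uses to control violations.
    bonus = np.sqrt(np.log(S * A * H * K) / counts)
    Q = reward_sums / counts + bonus
    s = 0
    for h in range(H):
        a = int(np.argmax(Q[s]))
        s_next, r, violated = step(s, a)
        total_violation += violated  # accumulated at every step, not per episode
        reward_sums[s, a] += r
        counts[s, a] += 1
        s = s_next

print(f"cumulative step-wise violation after {K} episodes: {total_violation}")
```

The quantity `total_violation` is what the paper's $\widetilde{O}(\sqrt{ST})$ bound controls: it grows sublinearly in the total number of steps $T = KH$, so the per-step violation rate vanishes as learning proceeds.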
