A Provable Approach for End-to-End Safe Reinforcement Learning

Abstract

A longstanding goal in safe reinforcement learning (RL) is a method to ensure the safety of a policy throughout the entire process, from learning to operation. However, existing safe RL paradigms inherently struggle to achieve this objective. We propose a method, called Provably Lifetime Safe RL (PLS), that integrates offline safe RL with safe policy deployment to address this challenge. Our proposed method learns a policy offline using return-conditioned supervised learning and then deploys the resulting policy while cautiously optimizing a limited set of parameters, known as target returns, using Gaussian processes (GPs). Theoretically, we justify the use of GPs by analyzing the mathematical relationship between target and actual returns. We then prove that PLS finds near-optimal target returns while guaranteeing safety with high probability. Empirically, we demonstrate that PLS outperforms baselines in both safety and reward performance, thereby achieving the longstanding goal of obtaining high rewards while ensuring the safety of a policy throughout its lifetime, from learning to operation.

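As an illustration of the deployment phase described in the abstract, the sketch below shows one way target returns could be tuned cautiously with Gaussian processes: fit GPs to the observed reward and cost of each trial, restrict attention to target returns whose cost upper confidence bound stays within a safety budget, and pick the reward-maximizing candidate among them. Everything here, including the rollout_policy placeholder, the candidate grid, cost_budget, and the confidence parameter beta, is an illustrative assumption rather than the authors' implementation.

# Hypothetical sketch of a PLS-style deployment phase: cautiously tuning the
# target return fed to a return-conditioned policy, using Gaussian processes.
# rollout_policy, cost_budget, the candidate grid, and beta are assumptions
# made for illustration, not the paper's actual implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rollout_policy(target_return: float) -> tuple[float, float]:
    """Placeholder: run the offline-learned, return-conditioned policy with
    the given target return and report (actual reward, actual cost)."""
    raise NotImplementedError

cost_budget = 25.0                                   # safety constraint on cost
candidates = np.linspace(0.0, 100.0, 201).reshape(-1, 1)  # target-return grid
beta = 2.0                                           # confidence-bound width

# Seed with a conservative target return assumed to be safe.
X = np.array([[10.0]])
r, c = rollout_policy(10.0)
rewards, costs = [r], [c]

for _ in range(30):                                  # deployment-time budget
    # Fit one GP to observed rewards and one to observed costs.
    gp_r = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, rewards)
    gp_c = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, costs)

    mu_r, sd_r = gp_r.predict(candidates, return_std=True)
    mu_c, sd_c = gp_c.predict(candidates, return_std=True)

    # Keep only target returns whose cost upper confidence bound stays within
    # the budget (the "cautious" part), then maximize the reward UCB among them.
    safe = mu_c + beta * sd_c <= cost_budget
    if not safe.any():
        break                                        # no certifiably safe candidate
    idx = np.argmax(np.where(safe, mu_r + beta * sd_r, -np.inf))

    x_next = candidates[idx]
    r, c = rollout_policy(float(x_next[0]))
    X = np.vstack([X, x_next])
    rewards.append(r)
    costs.append(c)
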
@article{wachi2025_2505.21852,
  title={A Provable Approach for End-to-End Safe Reinforcement Learning},
  author={Akifumi Wachi and Kohei Miyaguchi and Takumi Tanabe and Rei Sato and Youhei Akimoto},
  journal={arXiv preprint arXiv:2505.21852},
  year={2025}
}