
Embedding Safety into RL: A New Take on Trust Region Methods

Abstract

Reinforcement Learning (RL) agents can solve diverse tasks but often exhibit unsafe behavior. Constrained Markov Decision Processes (CMDPs) address this by enforcing safety constraints, yet existing methods either sacrifice reward maximization or allow unsafe training. We introduce Constrained Trust Region Policy Optimization (C-TRPO), which reshapes the policy space geometry to ensure trust regions contain only safe policies, guaranteeing constraint satisfaction throughout training. We analyze its theoretical properties and connections to TRPO, Natural Policy Gradient (NPG), and Constrained Policy Optimization (CPO). Experiments show that C-TRPO reduces constraint violations while maintaining competitive returns.
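As a rough illustration of the idea described above (a sketch only; the barrier function phi and the specific divergence below are assumptions for illustration, not necessarily the exact formulation in the paper), TRPO solves a surrogate objective inside a KL trust region,

$$\max_{\theta}\; \mathbb{E}_{s,a \sim \pi_{\theta_k}}\!\left[\tfrac{\pi_\theta(a\mid s)}{\pi_{\theta_k}(a\mid s)}\, A^{\pi_{\theta_k}}(s,a)\right] \quad \text{s.t.} \quad \bar{D}_{\mathrm{KL}}\!\left(\pi_{\theta_k}\,\|\,\pi_\theta\right) \le \delta,$$

and a constrained variant can reshape this geometry by augmenting the divergence with a barrier that blows up at the boundary of the safe set $\{\pi : J_C(\pi) \le b\}$, for instance

$$D_C(\pi_{\theta_k}, \pi_\theta) \;=\; \bar{D}_{\mathrm{KL}}\!\left(\pi_{\theta_k}\,\|\,\pi_\theta\right) \;+\; \beta\,\phi\!\left(b - J_C(\pi_\theta)\right), \qquad \phi(x) = -\log x,$$

so that the trust region $\{\pi : D_C(\pi_{\theta_k}, \pi) \le \delta\}$ contains only policies whose expected cost $J_C$ stays below the budget $b$, which is the sense in which the trust region is restricted to safe policies.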

@article{milosevic2025_2411.02957,
  title={Embedding Safety into RL: A New Take on Trust Region Methods},
  author={Nikola Milosevic and Johannes Müller and Nico Scherf},
  journal={arXiv preprint arXiv:2411.02957},
  year={2025}
}