An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints

Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret O(sqrt(K)) and strong constraint violation O(sqrt(K)) without relying on Slater's condition or the existence of a strictly safe policy known in advance. We also show that access to accurate estimates of rewards and transitions can further improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
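The abstract does not spell out the update rules of OMDPD. As a rough illustration of the general optimistic primal-dual template that such methods build on, here is a minimal sketch on a toy single-state CMDP; the exploration bonus, step sizes, and dual update below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Minimal sketch of a generic optimistic mirror-descent primal-dual loop on a
# toy single-state CMDP with A actions. The bonus form, step sizes, and dual
# update are illustrative assumptions, not the paper's OMDPD algorithm.

rng = np.random.default_rng(0)
A = 4                        # number of actions
K = 2000                     # number of episodes
true_reward = rng.uniform(0, 1, A)
true_cost = rng.uniform(0, 1, A)    # constraint: expected cost <= threshold
threshold = 0.5

policy = np.full(A, 1.0 / A)        # primal variable: distribution over actions
dual = 0.0                          # dual variable for the cost constraint
eta, xi = 0.05, 0.05                # primal / dual step sizes (assumed)

counts = np.zeros(A)
reward_sum = np.zeros(A)
cost_sum = np.zeros(A)

for k in range(1, K + 1):
    # Empirical estimates with an optimistic (UCB-style) exploration bonus.
    mean_r = np.where(counts > 0, reward_sum / np.maximum(counts, 1), 0.0)
    mean_c = np.where(counts > 0, cost_sum / np.maximum(counts, 1), 1.0)
    bonus = np.sqrt(np.log(K) / np.maximum(counts, 1))
    opt_r = np.clip(mean_r + bonus, 0, 1)   # optimistic (high) reward
    opt_c = np.clip(mean_c - bonus, 0, 1)   # optimistic (low) cost

    # Mirror-descent (exponentiated-gradient) step on the Lagrangian r - dual * c.
    grad = opt_r - dual * opt_c
    policy = policy * np.exp(eta * grad)
    policy /= policy.sum()

    # Play an action and observe noisy reward/cost feedback.
    a = rng.choice(A, p=policy)
    counts[a] += 1
    reward_sum[a] += float(rng.random() < true_reward[a])
    cost_sum[a] += float(rng.random() < true_cost[a])

    # Projected dual ascent on the estimated constraint violation.
    dual = max(0.0, dual + xi * (policy @ opt_c - threshold))

print("final policy:", np.round(policy, 3), "dual:", round(dual, 3))
```

In this template, the optimistic estimates drive exploration, the mirror-descent step trades off reward against the dual-weighted cost, and the dual variable grows whenever the estimated cost exceeds the threshold, which pushes subsequent policies back toward the constraint set.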
@article{zhu2025_2505.21841,
  title   = {An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints},
  author  = {Jiahui Zhu and Kihyun Yu and Dabeen Lee and Xin Liu and Honghao Wei},
  journal = {arXiv preprint arXiv:2505.21841},
  year    = {2025}
}