Online Episodic Convex Reinforcement Learning

Abstract

We study online learning in episodic finite-horizon Markov decision processes (MDPs) with convex objective functions, known as the concave utility reinforcement learning (CURL) problem. This setting generalizes RL from linear to convex losses on the state-action distribution induced by the agent's policy. The non-linearity of CURL invalidates classical Bellman equations and requires new algorithmic approaches. We introduce the first algorithm achieving near-optimal regret bounds for online CURL without any prior knowledge of the transition function. To achieve this, we use an online mirror descent algorithm with varying constraint sets and a carefully designed exploration bonus. We then address, for the first time, a bandit version of CURL, where the only feedback is the value of the objective function on the state-action distribution induced by the agent's policy. We achieve a sublinear regret bound for this more challenging problem by adapting techniques from bandit convex optimization to the MDP setting.
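
To make the mirror-descent idea concrete, below is a minimal sketch of policy mirror descent for episodic CURL in Python/NumPy. It is a simplification of the setting described in the abstract: it assumes a known transition kernel and full gradient feedback, whereas the paper handles unknown transitions (via exploration bonuses) and bandit feedback. All names here (curl_mirror_descent, F_grad, eta, ...) are illustrative, not the paper's. Each round linearizes the convex objective F at the current occupancy measure, solves the resulting linear RL evaluation problem, and takes an exponentiated-gradient step per state.

import numpy as np

def occupancy(policy, P, mu0, H, S, A):
    """Forward pass: occupancy measure mu[h, s, a] induced by the policy."""
    mu = np.zeros((H, S, A))
    d = mu0.copy()                            # state distribution at step h
    for h in range(H):
        mu[h] = d[:, None] * policy[h]        # mu_h(s, a) = d_h(s) * pi_h(a | s)
        d = np.einsum("sa,sap->p", mu[h], P)  # next-step state distribution
    return mu

def q_values(policy, cost, P, H, S, A):
    """Backward pass: Q-values of the current policy for the linearized cost."""
    Q = np.zeros((H, S, A))
    V = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = cost[h] + np.einsum("sap,p->sa", P, V)
        V = (policy[h] * Q[h]).sum(axis=1)    # V^pi_h(s)
    return Q

def curl_mirror_descent(F_grad, P, mu0, H, S, A, T, eta):
    """F_grad maps an occupancy measure to the gradient of the convex loss F."""
    policy = np.full((H, S, A), 1.0 / A)      # uniform initial policy
    for _ in range(T):
        mu = occupancy(policy, P, mu0, H, S, A)
        cost = F_grad(mu)                     # linearize F at the current occupancy
        Q = q_values(policy, cost, P, H, S, A)
        policy = policy * np.exp(-eta * Q)    # exponentiated-gradient (KL mirror) step
        policy /= policy.sum(axis=2, keepdims=True)
    return policy

As a toy usage example (again, purely illustrative), minimizing the negative entropy of the occupancy measure, F(mu) = sum mu log mu with gradient log(mu) + 1, is a classical convex CURL objective that drives the policy toward uniform state-action coverage:

rng = np.random.default_rng(0)
H, S, A = 5, 4, 3
P = rng.dirichlet(np.ones(S), size=(S, A))   # random transition kernel P[s, a, s']
mu0 = np.full(S, 1.0 / S)                    # uniform initial state distribution
F_grad = lambda mu: np.log(mu + 1e-9) + 1.0  # epsilon guards against log(0)
pi = curl_mirror_descent(F_grad, P, mu0, H, S, A, T=200, eta=0.1)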

@article{moreno2025_2505.07303,
  title={Online Episodic Convex Reinforcement Learning},
  author={Bianca Marin Moreno and Khaled Eldowa and Pierre Gaillard and Margaux Brégère and Nadia Oudjane},
  journal={arXiv preprint arXiv:2505.07303},
  year={2025}
}