We consider reinforcement learning (RL) in Markov Decision Processes in which an agent repeatedly interacts with an environment that is modeled by a controlled Markov process. At each time step $t$, it earns a reward and also incurs a cost-vector consisting of $M$ costs. We design model-based RL algorithms that maximize the cumulative reward earned over a time horizon of $T$ time-steps, while simultaneously ensuring that the average values of the $M$ cost expenditures are bounded by agent-specified thresholds $c^{ub}_i,\ i=1,2,\ldots,M$. In order to measure the performance of a reinforcement learning algorithm that satisfies the average cost constraints, we define an $(M+1)$-dimensional regret vector that is composed of its reward regret and $M$ cost regrets. The reward regret measures the sub-optimality in the cumulative reward, while the $i$-th component of the cost regret vector is the difference between its $i$-th cumulative cost expense and the expected cost expenditure $T c^{ub}_i$. We prove that the expected value of the regret vector of UCRL-CMDP is upper-bounded as $\tilde{O}\left(T^{2/3}\right)$, where $T$ is the time horizon. We further show how to reduce the regret of a desired subset of the $M$ costs, at the expense of increasing the regrets of the reward and of the remaining costs. To the best of our knowledge, ours is the only work that considers non-episodic RL under average cost constraints and derives algorithms that can~\emph{tune the regret vector} according to the agent's requirements on its cost regrets.
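For concreteness, the regret vector described above can be written as the following minimal sketch, under assumed notation ($\rho^{\star}$ for the optimal constrained average reward, $r_t$ and $c^{(i)}_t$ for the reward and $i$-th cost incurred at step $t$); these are illustrative symbols, not the paper's exact definitions:
\[
  R^{(0)}(T) \;=\; T\rho^{\star} - \sum_{t=1}^{T} r_t,
  \qquad
  R^{(i)}(T) \;=\; \sum_{t=1}^{T} c^{(i)}_t \;-\; T\,c^{ub}_i,
  \quad i = 1,\dots,M,
\]
so that the regret vector is $\bigl(R^{(0)}(T), R^{(1)}(T), \dots, R^{(M)}(T)\bigr) \in \mathbb{R}^{M+1}$. Under this convention, each component must grow sublinearly in $T$ for the algorithm to be asymptotically reward-optimal while meeting the average-cost constraints.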