
Model-Free Algorithm and Regret Analysis for MDPs with Peak Constraints

Journal of Machine Learning Research (JMLR), 2020
Abstract

In the optimization of dynamic systems, the variables typically have constraints. Such problems can be modeled as a Constrained Markov Decision Process (CMDP). This paper considers peak constraints, where the agent chooses the policy to maximize the long-term average reward while satisfying the constraints at each time step. We propose a model-free algorithm that converts the CMDP problem into an unconstrained problem and applies a Q-learning based approach. The proposed algorithm achieves an $\tilde{O}(T^{\frac{1}{2}+\epsilon}\sqrt{H^4 SA})$ bound on the regret and an $O(HT^{\frac{1}{2}+\epsilon})$ bound on the number of constraint violations, where $\epsilon>0$ is an arbitrary positive number, $T$ is the time horizon, $S$ and $A$ are the numbers of states and actions, respectively, and $H$ is the number of steps per episode. We note that this is the first regret analysis for CMDPs with peak constraints where the transition probabilities are not known a priori. We demonstrate the proposed algorithm on an energy harvesting problem, where it outperforms the state of the art and performs close to the theoretical upper bound of the studied optimization problem.
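
The abstract does not spell out the reduction; as a rough illustration of the general idea (folding peak-constraint violations into the reward so that ordinary tabular Q-learning applies to the resulting unconstrained problem), a minimal sketch follows. The toy environment, the penalty form, and all names (e.g. `LAMBDA`, `cost_limit`) are assumptions for illustration, not the paper's algorithm or its regret-optimal exploration scheme.

```python
import numpy as np

# Minimal sketch: episodic Q-learning on a toy tabular CMDP where the per-step
# (peak) constraint is folded into the reward as a penalty, yielding an
# unconstrained problem. This is an illustrative assumption, not the paper's method.

S, A, H = 4, 2, 10          # states, actions, steps per episode
EPISODES = 2000
LAMBDA = 5.0                # hypothetical penalty weight for constraint violations
rng = np.random.default_rng(0)

# Hypothetical tabular CMDP: random rewards, per-step costs, and transitions.
reward = rng.uniform(0, 1, size=(S, A))
cost = rng.uniform(0, 1, size=(S, A))       # per-step cost of taking action a in state s
cost_limit = 0.7                             # peak constraint: cost(s, a) <= cost_limit at every step
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition probabilities P[s, a, s']

Q = np.zeros((H, S, A))
counts = np.zeros((H, S, A))

for ep in range(EPISODES):
    s = 0
    for h in range(H):
        # epsilon-greedy action selection (the paper uses a more refined exploration scheme)
        if rng.random() < 0.1:
            a = int(rng.integers(A))
        else:
            a = int(np.argmax(Q[h, s]))
        counts[h, s, a] += 1
        alpha = 1.0 / counts[h, s, a]

        # unconstrained surrogate reward: subtract a penalty when the peak constraint is violated
        violation = max(0.0, cost[s, a] - cost_limit)
        r = reward[s, a] - LAMBDA * violation

        s_next = rng.choice(S, p=P[s, a])
        target = r + (np.max(Q[h + 1, s_next]) if h + 1 < H else 0.0)
        Q[h, s, a] += alpha * (target - Q[h, s, a])
        s = s_next

print("Greedy action in state 0 at step 0:", int(np.argmax(Q[0, 0])))
```

The penalized-reward construction is one standard way to reduce a CMDP to an unconstrained MDP; the paper's theoretical guarantees depend on its specific conversion and learning-rate/exploration choices, which this sketch does not reproduce.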
