Model-Free Algorithm and Regret Analysis for MDPs with Peak Constraints

Journal of Machine Learning Research (JMLR), 2020
Abstract

In the optimization of dynamic systems, the variables typically have constraints. Such problems can be modeled as a constrained Markov Decision Process (MDP). This paper considers a model-free approach to the problem, where the transition probabilities are not known. In the presence of peak constraints, the agent has to choose a policy that maximizes the long-term average reward while satisfying the constraints at each time step. We propose a novel algorithm that converts the constrained problem into an unconstrained problem using a modification of the reward function, and a Q-learning based approach is applied to the unconstrained problem. The proposed algorithm is shown to achieve an $O(\sqrt{H^4 SAT\ell})$ bound for both the obtained reward and the constraint violations with probability at least $1-2p$, where $T$ is the time horizon, $A$ is the number of actions, $S$ is the number of states, $H$ is the number of steps in each episode, and $\ell = \log(\frac{2SAT}{p})$. We note that these are the first results on regret analysis for constrained MDPs where the transition probabilities are not known a priori. We demonstrate the proposed algorithm on an energy harvesting problem, where it outperforms the state of the art and performs close to the theoretical upper bound of the studied optimization problem.
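The reward-modification idea described in the abstract can be sketched on a toy episodic MDP: any state-action pair that violates the peak constraint is assigned a large negative reward, and ordinary tabular Q-learning is then run on the modified, unconstrained problem. Everything below (the MDP sizes, the reward and cost tables, the penalty value, and the exploration schedule) is an illustrative assumption, not the paper's exact construction or its optimistic exploration bonus.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 3, 2, 4  # states, actions, episode length (illustrative sizes)

# Toy MDP: action 1 yields a higher reward but violates the peak
# constraint in state 0 (cost above the threshold tau).
R = np.array([[0.2, 1.0],   # reward r(s, a)
              [0.5, 0.6],
              [0.3, 0.4]])
C = np.array([[0.1, 0.9],   # per-step constraint cost c(s, a)
              [0.2, 0.3],
              [0.1, 0.2]])
tau = 0.5                   # peak constraint: require c(s, a) <= tau
P = rng.dirichlet(np.ones(S), size=(S, A))  # random transition kernel

# Reward modification: penalize constraint-violating pairs so the
# unconstrained learner avoids them. Any penalty below the minimum
# achievable return works for this toy example.
penalty = -float(H)
R_mod = np.where(C <= tau, R, penalty)

# Tabular Q-learning on the modified reward, one Q-table per step h.
Q = np.zeros((H, S, A))
alpha, eps = 0.1, 0.1
for _ in range(2000):
    s = 0
    for h in range(H):
        a = rng.integers(A) if rng.random() < eps else int(np.argmax(Q[h, s]))
        s_next = rng.choice(S, p=P[s, a])
        future = np.max(Q[h + 1, s_next]) if h + 1 < H else 0.0
        Q[h, s, a] += alpha * (R_mod[s, a] + future - Q[h, s, a])
        s = s_next

# The greedy policy in state 0 should avoid the violating action 1,
# even though action 1 has the higher unmodified reward.
greedy_action = int(np.argmax(Q[0, 0]))
```

The key design choice mirrored here is that feasibility is enforced through the reward signal alone, so the learner needs no knowledge of the transition kernel `P`; the paper's analysis additionally uses optimism (UCB-style bonuses) to obtain the stated regret bound, which this plain epsilon-greedy sketch omits.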
