Towards Theoretical Understandings of Robust Markov Decision Processes: Sample Complexity and Asymptotics

In this paper, we study the non-asymptotic and asymptotic performance of the optimal robust policy and value function of robust Markov Decision Processes (MDPs), where the optimal robust policy and value function are estimated solely from a generative model. While prior work on the non-asymptotic performance of robust MDPs is restricted to the setting of the KL uncertainty set and the $(s,a)$-rectangular assumption, we improve their results and also consider other uncertainty sets, including the $L_1$ and $\chi^2$ balls. Our results show that under the $(s,a)$-rectangular assumption on the uncertainty sets, the sample complexity is about $\widetilde{O}\!\left(\frac{|\mathcal{S}|^2|\mathcal{A}|}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$. In addition, we extend our results from the $(s,a)$-rectangular assumption to the $s$-rectangular assumption. In this scenario, the sample complexity varies with the choice of uncertainty set and is generally larger than in the $(s,a)$-rectangular case. Moreover, we show that the optimal robust value function is asymptotically normal with a typical $\sqrt{n}$ rate under both the $(s,a)$-rectangular and $s$-rectangular assumptions, from both theoretical and empirical perspectives.
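To make the generative-model setting concrete, below is a minimal sketch, assuming an $(s,a)$-rectangular $L_1$ uncertainty set of radius `rho` around an empirical transition kernel `p_hat` built from next-state samples drawn from a generative model. This is not the paper's algorithm or notation; the names (`p_hat`, `rho`, `gamma`, `iters`) are illustrative assumptions. The inner worst-case expectation over the $L_1$ ball is solved as a small linear program.

```python
# Illustrative sketch only: robust value iteration with an (s,a)-rectangular
# L1 ball of radius rho around the empirical transition kernel p_hat.
import numpy as np
from scipy.optimize import linprog

def worst_case_value(p_hat, v, rho):
    """Solve min_p <p, v> s.t. ||p - p_hat||_1 <= rho, p in the simplex,
    as an LP over variables [p, t] with t >= |p - p_hat| elementwise."""
    n = len(v)
    c = np.concatenate([v, np.zeros(n)])            # minimize v . p
    A_ub = np.block([
        [np.eye(n), -np.eye(n)],                    #  p - t <=  p_hat
        [-np.eye(n), -np.eye(n)],                   # -p - t <= -p_hat
        [np.zeros((1, n)), np.ones((1, n))],        #  sum(t) <= rho
    ])
    b_ub = np.concatenate([p_hat, -p_hat, [rho]])
    A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]  # sum(p) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * n))
    return res.fun

def robust_value_iteration(p_hat, r, rho, gamma=0.9, iters=100):
    """p_hat: empirical kernel of shape (S, A, S), estimated from a generative
    model (e.g., n i.i.d. next-state draws per (s, a)); r: rewards (S, A)."""
    S, A, _ = p_hat.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = np.array([[r[s, a] + gamma * worst_case_value(p_hat[s, a], v, rho)
                       for a in range(A)] for s in range(S)])
        v = q.max(axis=1)                           # robust Bellman update
    return v, q.argmax(axis=1)                      # robust value and greedy policy
```

Other uncertainty sets (KL, $\chi^2$) would replace the inner LP with the corresponding dual problem; the LP form above is specific to the $L_1$ ball.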