
A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning

Abstract

Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP). This paper develops an information-theoretic lower bound for the sample complexity of the finite state, finite action IRL problem. A geometric construction of $\beta$-strict separable IRL problems using spherical codes is considered. Properties of the ensemble size as well as the Kullback-Leibler divergence between the generated trajectories are derived. The resulting ensemble is then used along with Fano's inequality to derive a sample complexity lower bound of $O(n \log n)$, where $n$ is the number of states in the MDP.
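For context, the Fano-type argument the abstract refers to typically proceeds as sketched below; the display is the standard generalized Fano bound, not the paper's exact lemma, and the symbols $M$, $m$, $V$, and $P_i$ are illustrative. Given an ensemble of $M$ IRL instances whose trajectory distributions are $P_1, \dots, P_M$, any estimator $\hat{V}$ of the instance index $V$ from $m$ i.i.d. trajectories satisfies

\[
\Pr[\hat{V} \neq V] \;\ge\; 1 - \frac{m \max_{i \neq j} D_{\mathrm{KL}}(P_i \,\|\, P_j) + \log 2}{\log M},
\]

so driving the error probability below a constant forces

\[
m \;\gtrsim\; \frac{\log M}{\max_{i \neq j} D_{\mathrm{KL}}(P_i \,\|\, P_j)}.
\]

If the spherical-code construction yields $\log M = \Omega(n \log n)$ while keeping the pairwise KL divergence between trajectory distributions bounded by a constant, this template gives the stated $O(n \log n)$ sample complexity lower bound.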
