Inverse Reinforcement Learning via Deep Gaussian Process

Abstract

The report proposes a new approach to inverse reinforcement learning based on deep Gaussian processes (GPs), which is capable of learning complicated reward structures from few demonstrations. The model stacks multiple latent GP layers to learn abstract representations of the state feature space, which are linked to the demonstrations through the Maximum Entropy learning framework. As analytic derivation of the model evidence is prohibitive due to the nonlinearity of the latent variables, variational inference is employed for approximate inference, based on a special choice of variational distributions. This guards the model against over-training, achieving an automatic Occam's razor. Experiments are performed on a standard benchmark, the object world, as well as on a new setup, the binary world; the proposed method outperforms state-of-the-art approaches on both.
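To make the Maximum Entropy IRL framework referenced above concrete, the following is a minimal sketch of the learning loop the reward model plugs into. It is not the paper's method: the deep-GP reward is replaced by a plain linear stand-in r(s) = phi(s)·w, and the toy chain MDP and all function names (`soft_value_iteration`, `expected_svf`) are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hedged sketch of Maximum Entropy IRL (the outer loop the paper's deep-GP
# reward model would plug into). The reward here is a simple linear model
# r(s) = phi(s) @ w on a toy 5-state chain MDP; everything below is an
# illustrative assumption, not the paper's implementation.

n_states, n_actions, gamma, horizon = 5, 2, 0.9, 20

# Deterministic 5-state chain: action 0 steps left, action 1 steps right.
T = np.zeros((n_states, n_actions, n_states))
for s in range(n_states):
    T[s, 0, max(s - 1, 0)] = 1.0
    T[s, 1, min(s + 1, n_states - 1)] = 1.0

phi = np.eye(n_states)                     # one-hot state features
true_w = np.array([0., 0., 0., 0., 1.])    # hidden reward: goal on the right

def soft_value_iteration(r):
    """MaxEnt (soft) Bellman backup; returns a stochastic policy pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(100):
        Q = r[:, None] + gamma * np.einsum('sat,t->sa', T, V)
        Vmax = Q.max(axis=1)               # log-sum-exp with max subtraction
        V = Vmax + np.log(np.exp(Q - Vmax[:, None]).sum(axis=1))
    return np.exp(Q - V[:, None])

def expected_svf(pi, p0):
    """Expected state-visitation frequencies under pi over `horizon` steps."""
    mu, total = p0.copy(), p0.copy()
    for _ in range(horizon - 1):
        mu = np.einsum('s,sa,sat->t', mu, pi, T)
        total += mu
    return total

p0 = np.zeros(n_states); p0[0] = 1.0
# Stand-in "demonstrations": visitations of an expert acting on the true reward.
demo_feat = phi.T @ expected_svf(soft_value_iteration(phi @ true_w), p0)

w = np.zeros(n_states)
for _ in range(200):   # gradient ascent on the MaxEnt log-likelihood
    pi = soft_value_iteration(phi @ w)
    w += 0.1 * (demo_feat - phi.T @ expected_svf(pi, p0))

print(np.argmax(w))    # the learned reward should peak at the goal state
```

The gradient is the classic MaxEnt IRL difference between demonstrated and expected feature counts; the paper's contribution would replace the linear reward (and its gradient) with a stacked-GP model trained by variational inference.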
