
Hybrid Adversarial Inverse Reinforcement Learning

Abstract

Learning from demonstrations and then surpassing the demonstrator is an ambitious goal of inverse reinforcement learning (IRL), known as beyond-demonstrator (BD) IRL. BD-IRL offers a new way to build expert systems, sidestepping the difficulty of reward function design and reducing computation costs. Most current BD-IRL algorithms are two-stage: they first infer a reward function and then learn a policy via reinforcement learning (RL). Because of these two separate procedures, two-stage algorithms suffer from high computational complexity and low robustness. To overcome these flaws, we propose a BD-IRL framework called hybrid adversarial inverse reinforcement learning (HAIRL), which integrates reward learning and exploration into a single procedure. Simulation results show that HAIRL is more efficient and robust than similar state-of-the-art (SOTA) algorithms.
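
To illustrate the single-procedure idea the abstract contrasts with two-stage BD-IRL, here is a minimal sketch of an adversarial IRL loop in which a discriminator that doubles as a learned reward and the policy are updated in the same iteration, rather than fitting the reward first and running RL afterwards. This is not the paper's actual HAIRL algorithm; the networks, dimensions, toy random data, and the naive policy objective are all illustrative assumptions.

```python
# Hypothetical single-loop adversarial IRL sketch (not the authors' HAIRL).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 4, 2

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

reward_net = mlp(OBS_DIM + ACT_DIM, 1)   # discriminator / learned reward
policy_net = mlp(OBS_DIM, ACT_DIM)       # Gaussian policy mean
log_std = nn.Parameter(torch.zeros(ACT_DIM))
d_opt = torch.optim.Adam(reward_net.parameters(), lr=3e-4)
p_opt = torch.optim.Adam(list(policy_net.parameters()) + [log_std], lr=3e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for real expert demonstrations (state-action pairs).
expert_sa = torch.randn(256, OBS_DIM + ACT_DIM)

for step in range(1000):
    # Sample on-policy transitions (toy random states for illustration).
    obs = torch.randn(256, OBS_DIM)
    dist = torch.distributions.Normal(policy_net(obs), log_std.exp())
    act = dist.rsample()
    policy_sa = torch.cat([obs, act], dim=-1)

    # Reward (discriminator) update: separate expert from policy samples.
    d_loss = bce(reward_net(expert_sa), torch.ones(256, 1)) + \
             bce(reward_net(policy_sa.detach()), torch.zeros(256, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Policy update in the SAME iteration, using the current learned reward.
    # (A real implementation would use PPO/SAC on full trajectories.)
    p_loss = -reward_net(policy_sa).mean()
    p_opt.zero_grad(); p_loss.backward(); p_opt.step()
```

Interleaving the two updates this way is what removes the separate reward-inference and RL phases that the abstract identifies as the source of the two-stage algorithms' cost and fragility.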
