On Generalization and Distributional Update for Mimicking Observations with Adequate Exploration

Abstract
This paper tackles the efficiency and stability issues in learning from observations (LfO). We begin by investigating how reward functions and policies generalize in LfO. We then replace the built-in reinforcement learning (RL) method in generative adversarial imitation from observation (GAIfO) with distributional soft actor-critic (DSAC). This change yields a novel algorithm, Mimicking Observations through Distributional Update Learning with adequate Exploration (MODULE), which combines the sample efficiency of soft actor-critic with the training stability of distributional RL.
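To make the pipeline the abstract describes concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of its two ingredients: a GAIfO-style discriminator over state transitions that supplies surrogate rewards, and a quantile-based distributional soft actor-critic update in place of the original inner RL step. All class and function names (Discriminator, QuantileCritic, gaifo_reward, quantile_huber_loss, critic_target) and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256):
    # Small two-layer network used by both the discriminator and the critic.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class Discriminator(nn.Module):
    """GAIfO-style discriminator over state transitions (s, s')."""
    def __init__(self, state_dim):
        super().__init__()
        self.net = mlp(2 * state_dim, 1)

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))  # classification logits

class QuantileCritic(nn.Module):
    """Distributional critic predicting N quantiles of the soft return."""
    def __init__(self, state_dim, action_dim, n_quantiles=32):
        super().__init__()
        self.net = mlp(state_dim + action_dim, n_quantiles)

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))  # shape (batch, n_quantiles)

def gaifo_reward(disc, s, s_next):
    """Surrogate reward -log(1 - D(s, s')): higher when a policy transition looks expert-like."""
    with torch.no_grad():
        return -F.logsigmoid(-disc(s, s_next))

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """Quantile-regression Huber loss for the distributional critic update."""
    td = target.unsqueeze(1) - pred.unsqueeze(2)          # (batch, N, N) pairwise TD errors
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    weight = (taus.view(1, -1, 1) - (td.detach() < 0).float()).abs()
    return (weight * huber).mean()

def critic_target(reward, done, next_quantiles, next_logprob, gamma=0.99, alpha=0.2):
    """Soft distributional Bellman target: r + gamma * (Z(s', a') - alpha * log pi(a'|s'))."""
    soft_next = next_quantiles - alpha * next_logprob.unsqueeze(-1)
    return reward + gamma * (1.0 - done) * soft_next
```

In a full training loop, the discriminator would be trained to separate expert from policy state transitions, the critic would regress its quantiles toward the target above using the quantile Huber loss, and the actor would be updated SAC-style to maximize the mean of the critic's quantiles plus an entropy bonus.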