Joint Learning of Hierarchical Neural Options and Abstract World Model

Wasu Top Piriyakulkij
Wolfgang Lehrach
Kevin Ellis
Kevin Murphy
Main: 8 pages · Appendix: 10 pages · Bibliography: 4 pages · 9 figures · 10 tables
Abstract

Building agents that can perform new skills by composing existing skills is a long-standing goal of AI agent research. To this end, we investigate how to efficiently acquire a sequence of skills, formalized as hierarchical neural options. Existing model-free hierarchical reinforcement learning algorithms, however, require large amounts of data. We propose a novel method, which we call AgentOWL (Option and World model Learning Agent), that jointly learns -- in a sample-efficient way -- an abstract world model (abstracting across both states and time) and a set of hierarchical neural options. We show, on a subset of Object-Centric Atari games, that our method learns more skills using far less data than baseline methods.
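To make the terminology concrete: in the classical options framework, an option bundles an initiation set, an intra-option policy, and a termination condition, and an option-level world model predicts transitions between abstract states, abstracting over both state detail and time. The sketch below is purely illustrative (the class names, tabular model, and toy states are assumptions, not the paper's implementation):

```python
# Illustrative sketch of the options framework, NOT AgentOWL's implementation.
# All names and the toy abstract states here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class Option:
    """An option: initiation set, intra-option policy, termination condition."""
    name: str
    can_start: Callable[[str], bool]    # initiation set over abstract states
    policy: Callable[[str], str]        # low-level action policy while active
    terminates: Callable[[str], bool]   # when the option hands back control


class AbstractWorldModel:
    """Tabular option-level model: (abstract state, option name) -> next abstract state.

    Because one entry summarizes an entire option execution, the model
    abstracts over time as well as over low-level state detail.
    """

    def __init__(self) -> None:
        self.transitions: Dict[Tuple[str, str], str] = {}

    def update(self, state: str, option: str, next_state: str) -> None:
        self.transitions[(state, option)] = next_state

    def predict(self, state: str, option: str) -> str:
        # Fall back to "no change" for unseen (state, option) pairs.
        return self.transitions.get((state, option), state)


# Toy usage: record that running "open_door" from "at_door" ends in "in_room".
model = AbstractWorldModel()
model.update("at_door", "open_door", "in_room")
print(model.predict("at_door", "open_door"))   # prints: in_room
print(model.predict("in_room", "open_door"))   # unseen pair -> prints: in_room
```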
