
Goal-Directed Planning by Reinforcement Learning and Active Inference

Abstract

What is the difference between goal-directed and habitual behavior? We propose a novel computational framework for decision making based on Bayesian inference, in which all components are integrated into a single neural network model. The model learns to predict environmental state transitions through self-exploration, generating motor actions by sampling stochastic internal states z. Habitual behavior, obtained from the prior distribution of z, is acquired by reinforcement learning. Goal-directed behavior is derived from the posterior distribution of z by planning with active inference: the past, current, and future values of z are optimized by minimizing the variational free energy for the desired future observation, constrained by the observed sensory sequence. We demonstrate the effectiveness of the proposed framework in experiments on a sensorimotor navigation task with camera observations and continuous motor actions.
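The abstract describes planning as optimization of the latent sequence z under a learned transition model. The following is a minimal, illustrative sketch (not the authors' implementation) of that idea: a sequence of latents z is optimized by gradient descent on a variational-free-energy-like objective combining reconstruction of the observed past, prediction error to a desired future observation, and a penalty toward the habitual prior over z. The class `TransitionModel`, the function `plan_by_free_energy`, and all dimensions and weights are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Toy learned dynamics: predicts the next observation from the current
    observation and a stochastic latent z (stands in for the trained world model)."""
    def __init__(self, obs_dim=4, z_dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

def plan_by_free_energy(model, past_obs, goal_obs, horizon, z_dim=2,
                        n_steps=200, lr=0.05, kl_weight=0.1):
    """Optimize past and future latents z by minimizing a free-energy-like objective:
    reconstruction error on the observed past, prediction error to the desired goal
    observation, and a crude KL-style penalty toward a unit-Gaussian prior."""
    T_past = past_obs.shape[0] - 1          # number of observed transitions
    z = torch.zeros(T_past + horizon, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(n_steps):
        opt.zero_grad()
        pred = past_obs[0]
        past_err = 0.0
        # Roll the model forward from the first observation using the current z.
        for t in range(T_past + horizon):
            pred = model(pred, z[t])
            if t < T_past:                  # constrain by the observed sensory sequence
                past_err = past_err + ((pred - past_obs[t + 1]) ** 2).sum()
        goal_err = ((pred - goal_obs) ** 2).sum()   # reach the desired future observation
        prior_term = kl_weight * (z ** 2).sum()     # stand-in for KL(q(z) || p(z))
        loss = past_err + goal_err + prior_term
        loss.backward()
        opt.step()
    return z.detach()

# Usage with random placeholder data:
model = TransitionModel()
past_obs = torch.randn(3, 4)      # a short observed trajectory
goal_obs = torch.randn(4)         # the desired future observation
z_plan = plan_by_free_energy(model, past_obs, goal_obs, horizon=5)
print(z_plan.shape)               # (7, 2): latents for the past and planned future steps
```

In this sketch the same latent sequence explains the observed past and drives the planned future, loosely mirroring the paper's description of optimizing past, current, and future z jointly; in the actual framework the objective is the variational free energy of the trained generative model rather than these squared-error surrogates.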
