Multi-step Greedy Reinforcement Learning Algorithms

Abstract

Multi-step greedy policies have been extensively used in model-based reinforcement learning (RL), both when a model of the environment is available (e.g., in the game of Go) and when it is learned. In this paper, we explore their benefits in model-free RL, when employed using multi-step dynamic programming algorithms: κ-Policy Iteration (κ-PI) and κ-Value Iteration (κ-VI). These methods iteratively compute the next policy (κ-PI) and value function (κ-VI) by solving a surrogate decision problem with a shaped reward and a smaller discount factor. We derive model-free RL algorithms based on κ-PI and κ-VI in which the surrogate problem can be solved by any discrete- or continuous-action RL method, such as DQN and TRPO. We identify the importance of a hyper-parameter that controls the extent to which the surrogate problem is solved, and suggest a way to set it. When evaluated on a range of Atari and MuJoCo benchmark tasks, our results indicate that for the right range of κ, our algorithms outperform DQN and TRPO. This shows that our multi-step greedy algorithms are general enough to be applied on top of any existing RL algorithm and can significantly improve its performance.
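The surrogate decision problem mentioned above can be illustrated in the tabular setting. The sketch below assumes the standard κ-greedy construction: the surrogate MDP keeps the original dynamics but uses the shaped reward r(s, a) + (1 − κ)γ E[V(s′)] and the reduced discount factor κγ, with the current value estimate V providing the shaping. All function and variable names here are illustrative, not from the paper's released code.

```python
import numpy as np

def kappa_value_iteration(P, R, gamma, kappa, outer_iters=50, inner_iters=200):
    """Tabular sketch of kappa-VI (assumed surrogate construction).

    P: (S, A, S) transition probabilities; R: (S, A) rewards.
    Each outer step solves a surrogate MDP whose reward is shaped by the
    current value estimate V and whose discount is kappa * gamma.
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(outer_iters):
        # Shaped surrogate reward: r(s,a) + (1 - kappa) * gamma * E[V(s')]
        R_surr = R + (1.0 - kappa) * gamma * (P @ V)
        # Solve the surrogate MDP (smaller discount kappa * gamma)
        # by ordinary value iteration; in the paper this inner solve is
        # handed to a model-free learner such as DQN or TRPO instead.
        W = V.copy()
        for _ in range(inner_iters):
            W = np.max(R_surr + kappa * gamma * (P @ W), axis=-1)
        V = W
    return V
```

Note the two extremes: with κ = 1 the shaping term vanishes and the inner loop is standard value iteration on the original problem, while with κ = 0 the surrogate has discount zero, so each outer step reduces to a single one-step greedy Bellman backup.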
