A Self-Tuning Actor-Critic Algorithm

Abstract

Reinforcement learning algorithms are highly sensitive to the choice of hyperparameters, typically requiring significant manual effort to identify hyperparameters that perform well on a new domain. In this paper, we take a step towards addressing this issue by using metagradients to automatically adapt hyperparameters online by meta-gradient descent (Xu et al., 2018). We apply our algorithm, Self-Tuning Actor-Critic (STAC), to self-tune all the differentiable hyperparameters of an actor-critic loss function, to discover auxiliary tasks, and to improve off-policy learning using a novel leaky V-trace operator. STAC is simple to use, sample efficient, and does not require a significant increase in compute. Ablative studies show that the overall performance of STAC improves as more hyperparameters are adapted. When applied to the Arcade Learning Environment (Bellemare et al., 2012), STAC improved the median human-normalized score in 200M steps from 243% to 364%. When applied to the DM Control suite (Tassa et al., 2018), STAC improved the mean score in 30M steps from 217 to 389 when learning with features, from 108 to 202 when learning from pixels, and from 195 to 295 in the Real-World Reinforcement Learning Challenge (Dulac-Arnold et al., 2020).
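To illustrate the meta-gradient idea the abstract refers to (Xu et al., 2018), the sketch below shows the core mechanism on a toy problem: an inner update adjusts parameters under a loss that depends on a differentiable hyperparameter, and an outer (meta) gradient is taken through that update to adapt the hyperparameter online. The problem setup (quadratic losses, the mixing weight `eta`, targets `a` and `b`, step sizes `alpha` and `beta`) is entirely hypothetical and chosen for a closed-form derivative; it is not the STAC loss or implementation.

```python
def meta_gradient_step(theta, eta, a, b, alpha, beta):
    """One inner update on theta plus one meta-gradient update on eta.

    Inner loss (depends on hyperparameter eta):
        L_in(theta; eta) = eta * (theta - a)^2 + (1 - eta) * (theta - b)^2
    Outer loss (what we actually care about):
        L_out(theta') = (theta' - a)^2
    """
    # Inner update: gradient descent on L_in with respect to theta.
    g_inner = 2 * eta * (theta - a) + 2 * (1 - eta) * (theta - b)
    theta_new = theta - alpha * g_inner

    # Meta-gradient: differentiate theta_new with respect to eta
    # (theta held fixed), then apply the chain rule through L_out.
    dtheta_deta = -alpha * (2 * (theta - a) - 2 * (theta - b))
    g_outer = 2 * (theta_new - a) * dtheta_deta

    # Outer update: adapt the hyperparameter, clipped to [0, 1].
    eta_new = min(1.0, max(0.0, eta - beta * g_outer))
    return theta_new, eta_new


theta, eta = 0.0, 0.5
for _ in range(200):
    theta, eta = meta_gradient_step(theta, eta, a=1.0, b=-1.0,
                                    alpha=0.1, beta=0.5)
# The outer loss prefers target a, so eta is driven towards 1
# and theta converges to a.
```

In practice the inner update is a full actor-critic gradient step and the derivative through it is computed by automatic differentiation rather than by hand, but the two-level structure is the same.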
