
Maximizing the Total Reward via Reward Tweaking

Abstract

In reinforcement learning, the discount factor $\gamma$ controls the agent's effective planning horizon. Traditionally, this parameter was considered part of the MDP; however, as deep reinforcement learning algorithms tend to become unstable when the effective planning horizon is long, recent works refer to $\gamma$ as a hyper-parameter. In this work, we focus on the finite-horizon setting and introduce \emph{reward tweaking}. Reward tweaking learns a surrogate reward function $\tilde r$ for the discounted setting, which induces an optimal (undiscounted) return in the original finite-horizon task. Theoretically, we show that there exists a surrogate reward which leads to optimality in the original task and discuss the robustness of our approach. Additionally, we perform experiments in a high-dimensional continuous control task and show that reward tweaking guides the agent towards better long-horizon returns when it plans for short horizons using the tweaked reward.
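The abstract does not spell out how the surrogate reward is trained, so the following is a minimal illustrative sketch, not the paper's method: it assumes a simple regression objective in which a learned $\tilde r$ is fit so that the short-horizon *discounted* return under $\tilde r$ matches the *undiscounted* finite-horizon return under the original reward. The class and function names (`SurrogateReward`, `tweak_step`) and the batch layout are hypothetical.

```python
# Illustrative sketch only; the paper's actual objective may differ.
import torch
import torch.nn as nn

class SurrogateReward(nn.Module):
    """Small MLP mapping (state, action) to a tweaked scalar reward."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def tweak_step(r_tilde, optimizer, batch, gamma: float):
    """One regression step on a batch of trajectories.

    batch: dict with
      obs -- (B, T, obs_dim) states
      act -- (B, T, act_dim) actions
      rew -- (B, T) original (undiscounted) rewards
    """
    obs, act, rew = batch["obs"], batch["act"], batch["rew"]
    B, T = rew.shape
    discounts = gamma ** torch.arange(T, dtype=rew.dtype)

    # Discounted return under the learned surrogate reward r_tilde.
    surrogate = r_tilde(obs, act)                      # (B, T)
    disc_return = (discounts * surrogate).sum(dim=-1)  # (B,)

    # Undiscounted finite-horizon return under the true reward.
    target_return = rew.sum(dim=-1)                    # (B,)

    # Fit r_tilde so short-horizon discounted planning is aligned
    # with the original undiscounted objective.
    loss = ((disc_return - target_return) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this sketch, an agent would then be trained with any standard discounted-RL algorithm on $\tilde r$ with a small $\gamma$, the idea being that maximizing the tweaked discounted return also maximizes the original undiscounted return.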
