
Reliability-Adjusted Prioritized Experience Replay

Leonard S. Pleiss
Tobias Sutter
Maximilian Schiffer
Main: 9 pages
6 figures
Bibliography: 2 pages
Appendix: 9 pages
Abstract

Experience replay enables online reinforcement learning agents to learn data-efficiently from past experiences. Traditionally, experiences were sampled uniformly from a replay buffer, regardless of differences in their learning potential. In an effort to sample more efficiently, researchers introduced Prioritized Experience Replay (PER). In this paper, we propose an extension to PER based on a novel measure of temporal-difference (TD) error reliability. We theoretically show that the resulting transition selection algorithm, Reliability-adjusted Prioritized Experience Replay (ReaPER), enables more efficient learning than PER. We further present empirical results showing that ReaPER outperforms PER across various environment types, including the Atari-10 benchmark.
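The abstract does not spell out the reliability measure itself, so the sketch below only illustrates the mechanism it plugs into: standard proportional PER (Schaul et al., 2016), with each transition's priority scaled by a caller-supplied reliability score in [0, 1]. The class name `ReliabilityAdjustedBuffer` and the `reliability` argument are placeholders for exposition, not the paper's API, and the paper's actual reliability measure would replace the caller-supplied score.

```python
import numpy as np

class ReliabilityAdjustedBuffer:
    """Minimal proportional PER buffer with a hypothetical reliability weight.

    Standard PER: priority p_i = (|delta_i| + eps)^alpha, sampled with
    probability p_i / sum_k p_k, corrected by importance weights
    (N * P(i))^(-beta). Here each priority is additionally scaled by a
    reliability score r_i in [0, 1]; the paper defines its own measure,
    which this placeholder does not reproduce.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, td_error, reliability=1.0):
        # Priority from the TD-error magnitude, scaled by the reliability score.
        p = ((abs(td_error) + self.eps) ** self.alpha) * reliability
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition  # overwrite oldest entry
        self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.data)
        assert n > 0, "cannot sample from an empty buffer"
        probs = self.priorities[:n] / self.priorities[:n].sum()
        idx = np.random.choice(n, batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (n * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights
```

A caller would pass whatever reliability estimate is available, e.g. `buf.add(t, td_error=delta, reliability=r)`; with `reliability=1.0` throughout, the buffer reduces to plain proportional PER.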
