Improving Experience Replay through Modeling of Similar Transitions' Sets

12 November 2021
Daniel Eugênio Neves
João Pedro Oliveira Batisteli
Eduardo Felipe Lopes
Lucila Ishitani
Zenilton K. G. Patrocínio
    OffRL
Abstract

In this work, we propose and evaluate a new reinforcement learning method, COMPact Experience Replay (COMPER), which uses temporal difference learning with target values predicted through recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our objective is to reduce the number of experiences required to train an agent with respect to the total accumulated reward in the long run. The method's relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by established methods in the literature, which generally demand millions of video frames to train an agent on the Atari 2600 games. We report detailed results from five training trials of COMPER over just 100,000 frames and about 25,000 iterations with a small experience memory on eight challenging games of the Arcade Learning Environment (ALE). We also present results for a DQN agent trained under the same experimental protocol on the same set of games as a baseline. To verify how well COMPER approximates a good policy from a smaller number of observations, we also compare its results with those obtained from millions of frames in the ALE benchmark.
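The abstract describes two ingredients: a second, compact memory that groups similar transitions into sets, and TD targets predicted from those sets rather than from individual samples. The sketch below illustrates only the two-memory grouping idea under simplifying assumptions; the class and method names (TwoMemoryReplay, sample_set), and the similarity key based on coarse state discretization, are hypothetical placeholders, not COMPER's actual similarity criterion or recurrent target model, which are defined in the paper itself.

```python
# Illustrative sketch of a two-memory replay with similar-transition sets.
# Assumptions: states are small numeric tuples; similarity is approximated
# by a coarse discretization key. COMPER's real construction differs.
import random
from collections import defaultdict, deque


class TwoMemoryReplay:
    """Keeps raw transitions and groups similar ones into sets."""

    def __init__(self, capacity=25_000):
        self.transitions = deque(maxlen=capacity)  # first memory: raw transitions
        self.similar_sets = defaultdict(list)      # second memory: sets of similar transitions

    @staticmethod
    def _key(state, action):
        # Hypothetical similarity key: round state components and pair with the action.
        return (tuple(round(x, 1) for x in state), action)

    def add(self, state, action, reward, next_state, done):
        t = (state, action, reward, next_state, done)
        self.transitions.append(t)
        self.similar_sets[self._key(state, action)].append(t)

    def sample_set(self):
        # Sample one set of similar transitions; a recurrent model could be run
        # over such a set to predict a shared TD target for its members.
        key = random.choice(list(self.similar_sets))
        return self.similar_sets[key]


# Usage: two near-identical transitions land in the same set.
replay = TwoMemoryReplay()
replay.add(state=(0.12, -0.30), action=1, reward=0.0, next_state=(0.15, -0.28), done=False)
replay.add(state=(0.11, -0.31), action=1, reward=1.0, next_state=(0.14, -0.27), done=False)
similar = replay.sample_set()  # a list containing both transitions
```

One motivation for training on sets rather than independently sampled transitions is sample efficiency: each observed frame can contribute to a shared target estimate, which is consistent with the paper's goal of learning from only 100,000 frames.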
