An Empirical Study of the Effectiveness of Using a Replay Buffer on Mode Discovery in GFlowNets

15 July 2023
Nikhil Vemgal
Elaine Lau
Doina Precup
Abstract

Reinforcement Learning (RL) algorithms aim to learn an optimal policy by iteratively sampling actions so as to maximize the total expected return, R(x). GFlowNets are a special class of algorithms designed to generate diverse candidates, x, from a discrete set, by learning a policy that approximates sampling proportional to R(x). GFlowNets exhibit improved mode discovery compared to conventional RL algorithms, which is very useful for applications such as drug discovery and combinatorial search. However, since GFlowNets are a relatively recent class of algorithms, many techniques which are useful in RL have not yet been applied to them. In this paper, we study the use of a replay buffer for GFlowNets. We empirically explore various replay buffer sampling techniques and assess their impact on the speed of mode discovery and the quality of the modes discovered. Our experimental results in the Hypergrid toy domain and a molecule synthesis environment demonstrate significant improvements in mode discovery when training with a replay buffer, compared to training only with trajectories generated on-policy.
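To make the setup concrete, here is a minimal sketch (not the authors' code) of the idea the paper studies: assembling each GFlowNet training batch from a mix of fresh on-policy rollouts and trajectories drawn from a replay buffer. All names (ReplayBuffer, build_training_batch, rollout_fn), the replay fraction, and the reward-prioritized sampling choice are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import deque


class ReplayBuffer:
    """Stores completed trajectories together with their terminal reward R(x)."""

    def __init__(self, capacity=10_000):
        self.storage = deque(maxlen=capacity)

    def add(self, trajectory, reward):
        self.storage.append((trajectory, reward))

    def sample(self, k, prioritized=False):
        if not self.storage or k <= 0:
            return []
        if prioritized:
            # Reward-prioritized replay: favour high-reward trajectories.
            weights = [reward for _, reward in self.storage]
            return random.choices(list(self.storage), weights=weights, k=k)
        return random.sample(list(self.storage), min(k, len(self.storage)))


def build_training_batch(rollout_fn, buffer, batch_size=16, replay_fraction=0.5):
    """Assemble one training batch from fresh rollouts plus replayed trajectories.

    `rollout_fn` is any callable returning a (trajectory, reward) pair sampled
    from the current GFlowNet policy; the surrounding training loop would then
    compute its GFlowNet objective (e.g. a trajectory-balance loss) over the
    returned batch.
    """
    n_replay = int(batch_size * replay_fraction)
    n_onpolicy = batch_size - n_replay

    batch = []
    for _ in range(n_onpolicy):
        trajectory, reward = rollout_fn()
        buffer.add(trajectory, reward)  # the buffer grows with on-policy data
        batch.append((trajectory, reward))

    batch.extend(buffer.sample(n_replay, prioritized=True))
    return batch


if __name__ == "__main__":
    # Toy usage: random "trajectories" with random rewards stand in for
    # real GFlowNet rollouts in Hypergrid or the molecule environment.
    buffer = ReplayBuffer()
    fake_rollout = lambda: ([random.randrange(4) for _ in range(5)], random.random())
    batch = build_training_batch(fake_rollout, buffer, batch_size=8)
    print(f"batch of {len(batch)} trajectories, buffer size {len(buffer.storage)}")
```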

View on arXiv