ResearchTrend.AI
arXiv:2103.15332

Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

29 March 2021
Sharada Mohanty
Jyotish Poonganam
Adrien Gaidon
Andrey Kolobov
Blake Wulfe
Dipam Chakraborty
Gražvydas Šemetulskis
João Schapke
J. Kubilius
Jurgis Pašukonis
Linas Klimas
Matthew J. Hausknecht
Patrick MacAlpine
Quang Nhat Tran
Thomas Tumiel
Xiaocheng Tang
Xinwei Chen
Christopher Hesse
Jacob Hilton
William H. Guss
Sahika Genc
John Schulman
K. Cobbe
Abstract

The NeurIPS 2020 Procgen Competition was designed as a centralized benchmark with clearly defined tasks for measuring sample efficiency and generalization in reinforcement learning. Generalization remains one of the most fundamental challenges in deep reinforcement learning, yet the community has few benchmarks for measuring its progress on this front. We present the design of a centralized reinforcement learning benchmark that measures sample efficiency and generalization through scalable, end-to-end evaluation of the training and rollout phases of thousands of user-submitted code bases. We built the benchmark on top of the existing Procgen Benchmark by defining clear tasks and standardizing the end-to-end evaluation setup. The design aims to maximize flexibility for researchers who wish to design future iterations of such benchmarks, while imposing the practical constraints necessary for a system like this to scale. This paper presents the competition setup, along with details and analysis of the top solutions identified through it in the 2020 iteration of the competition at NeurIPS.
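The two quantities the benchmark measures can be sketched in a few lines. This is a minimal, hypothetical illustration, not the competition's actual scoring code: the function names and example returns are invented, but the per-game min-max normalization of episode returns follows the scheme described for the Procgen Benchmark, and the generalization gap is one common way to compare performance on training levels versus held-out levels.

```python
def normalized_return(ret, r_min, r_max):
    """Min-max normalize an episode return to [0, 1] using per-game
    bounds, so scores are comparable across Procgen games.
    (Sketch; r_min/r_max stand in for the published per-game constants.)"""
    return (ret - r_min) / (r_max - r_min)


def mean_normalized_return(returns, r_min, r_max):
    """Average normalized return over a set of rollout episodes.
    Under a fixed training-timestep budget, a higher value indicates
    better sample efficiency."""
    return sum(normalized_return(r, r_min, r_max) for r in returns) / len(returns)


def generalization_gap(train_returns, test_returns, r_min, r_max):
    """Difference between mean normalized return on the training levels
    and on held-out test levels; a smaller gap suggests the policy
    generalizes better rather than memorizing its training levels."""
    return (mean_normalized_return(train_returns, r_min, r_max)
            - mean_normalized_return(test_returns, r_min, r_max))
```

For example, a policy averaging a normalized score of 0.9 on training levels but 0.5 on unseen levels has a gap of 0.4, even though its raw training performance looks strong.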
