Hyperparameter Selection for Imitation Learning

25 May 2021 · arXiv:2105.12034
Léonard Hussenot
Marcin Andrychowicz
Damien Vincent
Robert Dadashi
Anton Raichuk
Lukasz Stafiniak
Sertan Girgin
Raphaël Marinier
Nikola Momchev
Sabela Ramos
Manu Orsini
Olivier Bachem
Matthieu Geist
Olivier Pietquin
Abstract

We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous control, when the underlying reward function of the demonstrating expert cannot be observed at any time. The vast literature on imitation learning mostly considers this reward function to be available for HP selection, but this is not a realistic setting. Indeed, if this reward function were available, it could be used directly for policy training and imitation would not be necessary. To tackle this largely ignored problem, we propose a number of possible proxies for the external reward. We evaluate them in an extensive empirical study (more than 10,000 agents across 9 environments) and make practical recommendations for selecting HPs. Our results show that while imitation learning algorithms are sensitive to HP choices, it is often possible to select good-enough HPs through a proxy for the reward function.
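The selection loop the abstract describes can be pictured with a short sketch. The following is a hypothetical illustration, not the authors' code: it assumes a behavioral-cloning-style proxy (mean squared error between a candidate policy's actions and the expert's actions on held-out demonstrations), and the helpers train_fn, candidate_hps, and action_mse_proxy are names introduced here for the example only.

import numpy as np

def action_mse_proxy(policy, heldout_states, heldout_actions):
    # Proxy score: how closely the trained policy reproduces expert actions
    # on held-out expert states (lower is better). The true environment
    # reward is never queried. This is an illustrative proxy choice, not
    # necessarily one of the paper's proposed proxies.
    predicted = np.stack([policy(s) for s in heldout_states])
    return float(np.mean((predicted - np.asarray(heldout_actions)) ** 2))

def select_hyperparameters(candidate_hps, train_fn, heldout_states, heldout_actions):
    # Train one agent per HP configuration and keep the configuration whose
    # policy minimizes the proxy. candidate_hps: list of config dicts;
    # train_fn: runs one imitation-learning training job and returns a policy.
    results = []
    for hp in candidate_hps:
        policy = train_fn(hp)
        score = action_mse_proxy(policy, heldout_states, heldout_actions)
        results.append((score, hp))
    best_score, best_hp = min(results, key=lambda r: r[0])
    return best_hp, results

The design point is that the proxy only needs to rank candidate configurations similarly to the hidden reward near the top of the list; it does not need to estimate the reward's value.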
