
A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning

Abstract

The performance of modern reinforcement learning algorithms critically relies on tuning an ever-increasing number of hyperparameters. Often, small changes in a hyperparameter can lead to drastic changes in performance, and different environments require very different hyperparameter settings to achieve the state-of-the-art performance reported in the literature. We currently lack a scalable and widely accepted approach to characterizing these complex interactions. This work proposes a new empirical methodology for studying, comparing, and quantifying the sensitivity of an algorithm's performance to hyperparameter tuning for a given set of environments. We then demonstrate the utility of this methodology by assessing the hyperparameter sensitivity of several commonly used normalization variants of PPO. The results suggest that several algorithmic performance improvements may, in fact, be a result of an increased reliance on hyperparameter tuning.
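
The abstract does not spell out the metric, so the sketch below is only one plausible way to quantify this kind of sensitivity: it assumes sensitivity is summarized as the average gap between per-environment-tuned performance and the performance of a single configuration tuned across all environments. The function name, the score-matrix layout, and the aggregation are illustrative assumptions, not the paper's definition.

# Illustrative sketch only; not the paper's exact formulation.
import numpy as np

def hyperparameter_sensitivity(scores: np.ndarray) -> float:
    """scores[i, j] = normalized performance of hyperparameter config j on environment i.

    Returns the mean per-environment gap between the per-environment-tuned score
    and the score of the single best cross-environment config. A value near 0
    means one configuration suffices everywhere; larger values indicate that
    reported performance relies on per-environment tuning.
    """
    per_env_best = scores.max(axis=1)                # tune separately per environment
    cross_env_config = scores.mean(axis=0).argmax()  # single config, best on average
    cross_env_scores = scores[:, cross_env_config]
    return float(np.mean(per_env_best - cross_env_scores))

# Example: 3 environments x 4 hyperparameter configurations (made-up numbers)
scores = np.array([
    [0.9, 0.4, 0.3, 0.5],
    [0.2, 0.8, 0.4, 0.5],
    [0.3, 0.5, 0.9, 0.6],
])
print(hyperparameter_sensitivity(scores))  # large gap -> high sensitivity

Under these assumptions, an algorithm whose best results require a different configuration in every environment scores high, while an algorithm that performs well with one shared configuration scores near zero.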

@article{adkins2025_2412.07165,
  title={A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning},
  author={Jacob Adkins and Michael Bowling and Adam White},
  journal={arXiv preprint arXiv:2412.07165},
  year={2025}
}