How far away are truly hyperparameter-free learning algorithms?

29 May 2025
Priya Kasimbeg, Vincent Roulet, Naman Agarwal, Sourabh Medapati, Fabian Pedregosa, Atish Agarwala, George E. Dahl
Main: 13 pages · 9 figures · 10 tables · Bibliography: 4 pages · Appendix: 11 pages
Abstract

Despite major advances in methodology, hyperparameter tuning remains a crucial (and expensive) part of the development of machine learning systems. Even ignoring architectural choices, deep neural networks have a large number of optimization and regularization hyperparameters that need to be tuned carefully per workload in order to obtain the best results. In a perfect world, training algorithms would not require workload-specific hyperparameter tuning, but would instead have default settings that performed well across many workloads. Recently, there has been a growing literature on optimization methods which attempt to reduce the number of hyperparameters -- particularly the learning rate and its accompanying schedule. Given these developments, how far away is the dream of neural network training algorithms that completely obviate the need for painful tuning? In this paper, we evaluate the potential of learning-rate-free methods as components of hyperparameter-free methods. We freeze their (non-learning rate) hyperparameters to default values, and score their performance using the recently proposed AlgoPerf: Training Algorithms benchmark. We found that literature-supplied default settings performed poorly on the benchmark, so we performed a search for hyperparameter configurations that performed well across all workloads simultaneously. The best AlgoPerf-calibrated learning-rate-free methods had much improved performance but still lagged slightly behind a similarly calibrated NadamW baseline in overall benchmark score. Our results suggest that there is still much room for improvement for learning-rate-free methods, and that testing against a strong, workload-agnostic baseline is important to improve hyperparameter reduction techniques.
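The benchmark comparison described in the abstract rests on AlgoPerf-style scoring, which aggregates per-workload times-to-target into a single score via performance profiles. The sketch below is a rough Python illustration of that idea, not the paper's or AlgoPerf's actual scoring code; the optimizer names and per-workload numbers are hypothetical placeholders used only to show the shape of the computation.

# Minimal sketch (assumed, not the official AlgoPerf scoring code) of a
# performance-profile-style score comparing a calibrated learning-rate-free
# method against a NadamW baseline across several workloads.
import numpy as np

# time_to_target[algo][workload]: time (or steps) to reach the validation
# target on each workload; np.inf means the target was never reached.
# These numbers are hypothetical placeholders.
time_to_target = {
    "nadamw_baseline": np.array([1.0, 2.0, 1.5, 4.0, 3.0]),
    "lr_free_method":  np.array([1.2, 1.8, 2.5, np.inf, 3.5]),
}

def performance_profile(times, best_times, taus):
    # Fraction of workloads solved within a factor tau of the best algorithm.
    ratios = times / best_times  # inf / finite -> inf, i.e. never solved
    return np.array([(ratios <= tau).mean() for tau in taus])

# Element-wise best time across all algorithms on each workload.
best = np.minimum.reduce(list(time_to_target.values()))
taus = np.linspace(1.0, 4.0, 100)

for name, times in time_to_target.items():
    profile = performance_profile(times, best, taus)
    # Integrating the profile over tau and normalizing gives a single
    # benchmark-style score in [0, 1]; higher is better.
    score = np.trapz(profile, taus) / (taus[-1] - taus[0])
    print(f"{name}: score = {score:.3f}")

In the paper's actual setup, the times-to-target come from training the real AlgoPerf workloads to their validation targets; the values above are made up solely to illustrate the scoring mechanics.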

@article{kasimbeg2025_2505.24005,
  title={How far away are truly hyperparameter-free learning algorithms?},
  author={Priya Kasimbeg and Vincent Roulet and Naman Agarwal and Sourabh Medapati and Fabian Pedregosa and Atish Agarwala and George E. Dahl},
  journal={arXiv preprint arXiv:2505.24005},
  year={2025}
}