Practitioner Motives to Use Different Hyperparameter Optimization Methods

Programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization and evolutionary algorithms, are highly sample-efficient in identifying optimal hyperparameter configurations for machine learning (ML) models. However, practitioners frequently use less efficient methods, such as grid search, which can lead to under-optimized models. We suspect this behavior is driven by a range of practitioner-specific motives. These motives, however, remain insufficiently understood, hindering the user-centered development of HPO tools. To uncover practitioners' motives for using different HPO methods, we conducted 20 semi-structured interviews and an online survey with 49 ML experts. By presenting main goals (e.g., increasing ML model understanding) and contextual factors affecting practitioners' selection of HPO methods (e.g., available computing resources), this study offers a conceptual foundation for better understanding why practitioners use different HPO methods, supporting the development of more user-centered and context-adaptive HPO tools in automated ML.
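To illustrate the efficiency gap the abstract alludes to, here is a minimal sketch comparing exhaustive grid search with same-budget random search (a simple stand-in for more sample-efficient HPO methods). The toy objective, its optimum, and the hyperparameter names are hypothetical, chosen only for illustration:

```python
import itertools
import random

# Hypothetical objective standing in for validation performance;
# the true optimum (lr = 0.07, depth = 5) lies off the grid below.
def score(lr, depth):
    return -((lr - 0.07) ** 2) - 0.01 * (depth - 5) ** 2

# Grid search: evaluates every combination; cost grows
# multiplicatively with the number of values per dimension.
lrs = [0.001, 0.01, 0.1, 1.0]
depths = [2, 4, 6, 8]
grid_best = max(itertools.product(lrs, depths), key=lambda p: score(*p))

# Random search with the same evaluation budget samples the
# continuous learning-rate range directly, so it can land closer
# to off-grid optima than any fixed grid point.
random.seed(0)
budget = len(lrs) * len(depths)
candidates = [(10 ** random.uniform(-3, 0), random.randint(2, 8))
              for _ in range(budget)]
rand_best = max(candidates, key=lambda p: score(*p))

print("grid best  :", grid_best, score(*grid_best))
print("random best:", rand_best, score(*rand_best))
```

Both strategies spend 16 evaluations here; the point is not which one wins on this toy function, but that grid search's budget is locked to a fixed lattice, while sequential methods such as Bayesian optimization additionally reuse past evaluations to choose the next configuration.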
@article{kannengießer2025_2203.01717,
  title   = {Practitioner Motives to Use Different Hyperparameter Optimization Methods},
  author  = {Niclas Kannengießer and Niklas Hasebrook and Felix Morsbach and Marc-André Zöller and Jörg Franke and Marius Lindauer and Frank Hutter and Ali Sunyaev},
  journal = {arXiv preprint arXiv:2203.01717},
  year    = {2025}
}