Hyper-Tune: Towards Efficient Hyper-parameter Tuning at Scale

18 January 2022
Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Jixiang Li, Ji Liu, Ce Zhang, Bin Cui

Papers citing "Hyper-Tune: Towards Efficient Hyper-parameter Tuning at Scale"

5 / 5 papers shown
Regularized boosting with an increasing coefficient magnitude stop criterion as meta-learner in hyperparameter optimization stacking ensemble
Laura Fdez-Díaz, J. R. Quevedo, E. Montañés
02 Feb 2024

Transfer Learning for Bayesian Optimization: A Survey
Tianyi Bai, Yang Li, Yu Shen, Xinyi Zhang, Wentao Zhang, Bin Cui
12 Feb 2023

TransBO: Hyperparameter Optimization via Two-Phase Transfer Learning
Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Zhi-Xin Yang, Ce Zhang, Bin Cui
06 Jun 2022

Transfer Learning based Search Space Design for Hyperparameter Tuning
Yang Li, Yu Shen, Huaijun Jiang, Tianyi Bai, Wentao Zhang, Ce Zhang, Bin Cui
06 Jun 2022

ProxyBO: Accelerating Neural Architecture Search via Bayesian Optimization with Zero-cost Proxies
Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, Bin Cui
20 Oct 2021