arXiv:1902.03740

Harnessing Low-Fidelity Data to Accelerate Bayesian Optimization via Posterior Regularization

11 February 2019
B. Liu
Abstract

Bayesian optimization (BO) is a powerful derivative-free technique for global optimization of expensive black-box objective functions (BOFs). However, the overhead of BO can still be prohibitive when the budget of allowed function evaluations is smaller than required. In this paper, we investigate how to reduce the number of function evaluations required by BO without compromising solution quality. We explore the idea of posterior regularization for harnessing low-fidelity (LF) data within the Gaussian process upper confidence bound (GP-UCB) framework. The LF data are assumed to arise from previous evaluations of an LF approximation of the BOF. An extra GP expert, called LF-GP, is trained to fit the LF data. We develop a dynamic weighted product-of-experts (DW-POE) fusion operator, and the regularization is induced by this fusion operator on the posterior of the BOF. The impact of the LF-GP expert on the resulting regularized posterior is adaptively adjusted via a Bayesian formalism. Extensive experimental results on benchmark BOF optimization tasks demonstrate the superior performance of the proposed algorithm over state-of-the-art methods.
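The core fusion idea in the abstract can be sketched numerically: two Gaussian posteriors (the high-fidelity GP and the LF-GP expert) are combined in a weighted product of experts, and a UCB acquisition is then evaluated on the fused posterior. The sketch below is illustrative only and assumes fixed expert weights and toy posterior values; the paper's DW-POE adapts the LF weight dynamically via a Bayesian formalism, which is not reproduced here.

```python
import numpy as np

def poe_fuse(mu_hf, var_hf, mu_lf, var_lf, w_hf=1.0, w_lf=0.5):
    """Weighted product-of-experts fusion of two Gaussian posteriors.

    Raising each expert's Gaussian to its weight and multiplying gives
    another Gaussian whose precision is the weighted sum of the experts'
    precisions. (Fixed weights here; the paper adjusts w_lf adaptively.)
    """
    prec = w_hf / var_hf + w_lf / var_lf
    var = 1.0 / prec
    mu = var * (w_hf * mu_hf / var_hf + w_lf * mu_lf / var_lf)
    return mu, var

def ucb(mu, var, beta=2.0):
    """GP-UCB acquisition: posterior mean plus an exploration bonus."""
    return mu + np.sqrt(beta * var)

# Toy posterior means/variances over 5 candidate points (made-up values)
mu_hf  = np.array([0.1, 0.4, 0.3, 0.0, 0.2])
var_hf = np.array([0.5, 0.5, 0.1, 0.9, 0.3])
mu_lf  = np.array([0.2, 0.5, 0.2, 0.1, 0.6])
var_lf = np.array([0.2, 0.3, 0.2, 0.4, 0.1])

mu_f, var_f = poe_fuse(mu_hf, var_hf, mu_lf, var_lf)
x_next = int(np.argmax(ucb(mu_f, var_f)))  # candidate chosen for evaluation
```

Note that because the fused precision adds the (weighted) LF precision to the HF precision, the fused variance is always smaller than the HF variance alone, which is how the LF data regularizes the posterior.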
