Accelerated Bayesian Optimization through Weight-Prior Tuning

21 May 2018
A. Shilton, Sunil R. Gupta, Santu Rana, Pratibha Vellanki, Laurence Park, Cheng Li, Svetha Venkatesh, A. Sutti, David Rubin, T. Dorin, Alireza Vahid, Murray Height, Teo Slezak
Abstract

Bayesian optimization (BO) is a widely used method for optimizing expensive-to-evaluate problems. At the core of most BO methods is a model of the objective function as a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in the feature space implied by the chosen covariance K, with an arbitrary Gaussian weight prior w ∼ N(0, I). In many practical applications, data is available that shares the (covariance) structure of the objective but, having a different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that BO can be accelerated by modeling the objective with this learned weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.
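The weight-space view in the abstract suggests a simple illustration: fit Bayesian linear regression to auxiliary data in a fixed feature space, then reuse the resulting weight posterior as the prior for the target objective, so that the induced covariance k(x, x') = φ(x)ᵀ Σ_w φ(x') replaces the default φ(x)ᵀφ(x') in the BO surrogate. The sketch below is not the authors' algorithm; the random-Fourier-feature map, the auxiliary and target functions, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (under assumptions, not the paper's exact method) of weight-prior
# tuning for BO: learn a weight prior from auxiliary data, then run BO with the
# induced GP covariance and expected improvement. Requires only numpy and scipy.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- Feature map phi(x): random Fourier features approximating an RBF kernel ---
D = 200                                      # number of random features (assumed)
omega = rng.normal(scale=1.0, size=(D, 1))   # random frequencies for 1-D inputs
phase = rng.uniform(0, 2 * np.pi, size=D)

def phi(X):
    # X: (n, 1) -> (n, D) feature matrix
    return np.sqrt(2.0 / D) * np.cos(X @ omega.T + phase)

# --- Step 1: learn a weight prior from auxiliary (source) data ---
def auxiliary(x):                            # auxiliary task, illustrative only
    return np.sin(3 * x) + 0.3 * x

X_aux = rng.uniform(-3, 3, size=(60, 1))
y_aux = auxiliary(X_aux).ravel() + 0.05 * rng.normal(size=60)

noise = 0.05 ** 2
Phi_aux = phi(X_aux)
# Bayesian linear regression posterior over w under the default prior N(0, I);
# its covariance and mean become the tuned weight prior for the target objective.
A = Phi_aux.T @ Phi_aux / noise + np.eye(D)
Sigma_w = np.linalg.inv(A)                   # learned weight-prior covariance
mu_w = Sigma_w @ Phi_aux.T @ y_aux / noise   # learned weight-prior mean

# --- Step 2: GP surrogate for the target objective using the learned prior ---
def target(x):                               # "expensive" objective, illustrative
    return np.sin(3 * x) + 0.3 * x + 0.2 * np.cos(7 * x)

def posterior(X_train, y_train, X_test):
    # GP posterior with covariance k(x, x') = phi(x) Sigma_w phi(x')^T
    # and prior mean phi(x) mu_w.
    Pt, Ps = phi(X_train), phi(X_test)
    K = Pt @ Sigma_w @ Pt.T + noise * np.eye(len(X_train))
    Ks = Ps @ Sigma_w @ Pt.T
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train - Pt @ mu_w))
    mean = Ps @ mu_w + Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.einsum('ij,jk,ik->i', Ps, Sigma_w, Ps) - np.einsum('ji,ji->i', v, v)
    return mean, np.sqrt(np.maximum(var, 1e-12))

# --- Step 3: Bayesian optimization with expected improvement (maximization) ---
X_grid = np.linspace(-3, 3, 400).reshape(-1, 1)
X_obs = rng.uniform(-3, 3, size=(3, 1))
y_obs = target(X_obs).ravel()

for _ in range(10):
    mean, std = posterior(X_obs, y_obs, X_grid)
    best = y_obs.max()
    z = (mean - best) / std
    ei = (mean - best) * norm.cdf(z) + std * norm.pdf(z)
    x_next = X_grid[np.argmax(ei)].reshape(1, 1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, target(x_next).ravel())

print("best value found:", y_obs.max())
```

Compared with the default prior w ∼ N(0, I), which yields k(x, x') = φ(x)ᵀφ(x'), the learned Σ_w reshapes the covariance toward the structure seen in the auxiliary data, which is the effect the paper exploits to accelerate BO.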
