Towards Scalable Bayesian Optimization via Gradient-Informed Bayesian Neural Networks

14 April 2025
Georgios Makrygiorgos
Joshua Hang Sai Ip
Ali Mesbah
Abstract

Bayesian optimization (BO) is a widely used method for data-driven optimization that generally relies on zeroth-order data of the objective function to construct probabilistic surrogate models. These surrogates guide the exploration-exploitation process toward finding the global optimum. While Gaussian processes (GPs) are commonly employed as surrogates of the unknown objective function, recent studies have highlighted the potential of Bayesian neural networks (BNNs) as scalable and flexible alternatives. Moreover, incorporating gradient observations into GPs, when available, has been shown to improve BO performance. However, the use of gradients within BNN surrogates remains unexplored. By leveraging automatic differentiation, gradient information can be seamlessly integrated into BNN training, resulting in more informative surrogates for BO. We propose a gradient-informed loss function for BNN training, effectively augmenting function observations with local gradient information. The effectiveness of this approach is demonstrated on well-known benchmarks in terms of improved BNN predictions and faster BO convergence as the number of decision variables increases.
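As a rough illustration of the idea described above, the sketch below shows one way a gradient-informed training loss could combine function-value and gradient residuals, with the surrogate's gradients obtained via automatic differentiation. The network architecture, names (surrogate_fn, lambda_grad), and the simple quadratic weighting are assumptions for illustration, not the authors' implementation; a full BNN would additionally place distributions over the weights.

# Minimal sketch (assumed, not from the paper): gradient-informed loss for a
# neural-network surrogate, given observations of both f(x) and its gradient.
import jax
import jax.numpy as jnp

def surrogate_fn(params, x):
    # Tiny MLP surrogate; stands in for the BNN's predictive mean.
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze(-1)

def gradient_informed_loss(params, X, y, dy, lambda_grad=1.0):
    # Zeroth-order fit plus a term matching the surrogate's gradients
    # (computed by automatic differentiation) to observed gradients dy.
    preds = surrogate_fn(params, X)                          # predicted f(x_i)
    grads = jax.vmap(jax.grad(surrogate_fn, argnums=1),
                     in_axes=(None, 0))(params, X)           # predicted grad f(x_i)
    loss_f = jnp.mean((preds - y) ** 2)                      # function-value term
    loss_g = jnp.mean(jnp.sum((grads - dy) ** 2, axis=-1))   # gradient term
    return loss_f + lambda_grad * loss_g

In this sketch, lambda_grad trades off how strongly the local gradient observations constrain the surrogate relative to the function values; the paper's point is that such gradient augmentation is essentially free to add once the surrogate is trained with automatic differentiation.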

@article{makrygiorgos2025_2504.10076,
  title={Towards Scalable Bayesian Optimization via Gradient-Informed Bayesian Neural Networks},
  author={Georgios Makrygiorgos and Joshua Hang Sai Ip and Ali Mesbah},
  journal={arXiv preprint arXiv:2504.10076},
  year={2025}
}