Bayesian Optimistic Optimisation with Exponentially Decaying Regret

Abstract

Bayesian optimisation (BO) is a well-known, sample-efficient algorithm for finding the global optimum of expensive black-box functions. Current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations. This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation, which is based on partitioning the search space. We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$ under the assumption that the objective function is sampled from a Gaussian process with a Matérn kernel with smoothness parameter $\nu > 4 + \frac{D}{2}$, where $D$ is the number of dimensions. We perform experiments on the optimisation of various synthetic functions and on machine learning hyperparameter tuning tasks, and show that our algorithm outperforms baselines.
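To illustrate the tree-based optimistic optimisation ingredient mentioned above, here is a minimal sketch of deterministic optimistic optimisation over a hierarchical partition of a 1-D search space. This is not the paper's BOO algorithm (the Gaussian-process side is omitted entirely), and the Lipschitz constant `L`, the ternary splitting, and the test function are all illustrative assumptions: the cell with the largest optimistic upper bound is repeatedly expanded, so evaluations concentrate around the optimum.

```python
def doo_maximise(f, lo=0.0, hi=1.0, L=4.0, n_expansions=60):
    """Hierarchically partition [lo, hi]; at each step expand the cell
    whose optimistic upper bound f(midpoint) + L * halfwidth is largest.
    L is an assumed bound on the function's slope (not part of BOO itself)."""
    cells = [(lo, hi)]                        # current leaf cells of the tree
    values = {(lo, hi): f((lo + hi) / 2)}     # midpoint evaluation per cell
    for _ in range(n_expansions):
        # pick the leaf with the highest optimistic bound
        best = max(cells, key=lambda c: values[c] + L * (c[1] - c[0]) / 2)
        cells.remove(best)
        a, b = best
        w = (b - a) / 3
        for k in range(3):                    # split into three equal children
            child = (a + k * w, a + (k + 1) * w)
            values[child] = f((child[0] + child[1]) / 2)
            cells.append(child)
    best_cell = max(values, key=values.get)
    return (best_cell[0] + best_cell[1]) / 2, values[best_cell]

# Toy smooth objective with maximum at x = 0.7 (illustrative only)
x_star, f_star = doo_maximise(lambda x: 1.0 - (x - 0.7) ** 2)
```

BOO replaces the fixed Lipschitz-style bound with Gaussian-process-based information, which is what enables the improved regret rate claimed in the abstract.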
