No-Regret Algorithms for Safe Bayesian Optimization with Monotonicity Constraints

5 June 2024
Arpan Losalka
Jonathan Scarlett
Abstract

We consider the problem of sequentially maximizing an unknown function $f$ over a set of actions of the form $(s, \mathbf{x})$, where the selected actions must satisfy a safety constraint with respect to an unknown safety function $g$. We model $f$ and $g$ as lying in a reproducing kernel Hilbert space (RKHS), which facilitates the use of Gaussian process methods. While existing works for this setting have provided algorithms that are guaranteed to identify a near-optimal safe action, the problem of attaining low cumulative regret has remained largely unexplored, with a key challenge being that expanding the safe region can incur high regret. To address this challenge, we show that if $g$ is monotone with respect to just the single variable $s$ (with no such constraint on $f$), sublinear regret becomes achievable with our proposed algorithm. In addition, we show that a modified version of our algorithm is able to attain sublinear regret (for suitably defined notions of regret) for the task of finding a near-optimal $s$ corresponding to every $\mathbf{x}$, as opposed to only finding the global safe optimum. Our findings are supported with empirical evaluations on various objective and safety functions.
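
To make the problem setup concrete, the sketch below shows a generic confidence-bound-based safe Bayesian optimization loop over actions $(s, \mathbf{x})$: Gaussian process posteriors for $f$ and $g$ are maintained, a pessimistic safe set is formed from a lower confidence bound on $g$, and an optimistic (UCB) choice of $f$ is made within that set. This is only an illustrative sketch of the general framework, not the paper's algorithm; the toy functions, safety threshold `h`, confidence parameter `beta`, action grid, and seed safe action are all assumptions introduced here, and the paper's method additionally exploits the monotonicity of $g$ in $s$ to expand the safe region while controlling regret.

```python
# Illustrative sketch only (not the paper's algorithm): safe GP optimization
# over actions (s, x) with a pessimistic safe set and a UCB objective choice.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-ins for the unknown objective f and safety function g
# (g is monotone decreasing in s, mirroring the monotonicity assumption).
f = lambda s, x: np.sin(3 * x) - (s - 0.6) ** 2
g = lambda s, x: 1.0 - s + 0.2 * np.cos(2 * x)
h = 0.3      # assumed safety threshold: an action is safe if g(s, x) >= h
beta = 2.0   # assumed confidence-width parameter

# Discretized action grid over (s, x) in [0, 1]^2.
S, X = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
grid = np.column_stack([S.ravel(), X.ravel()])

# Seed with one known-safe action, as safe BO methods typically require.
actions = [np.array([0.0, 0.5])]
f_obs = [f(*actions[0]) + 0.01 * rng.standard_normal()]
g_obs = [g(*actions[0]) + 0.01 * rng.standard_normal()]

gp_f = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp_g = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)

for t in range(30):
    A = np.array(actions)
    gp_f.fit(A, np.array(f_obs))
    gp_g.fit(A, np.array(g_obs))

    mu_f, sd_f = gp_f.predict(grid, return_std=True)
    mu_g, sd_g = gp_g.predict(grid, return_std=True)

    # Pessimistic safe set: lower confidence bound on g clears the threshold.
    safe = (mu_g - beta * sd_g) >= h
    if not safe.any():
        break

    # Optimistic choice of f restricted to the safe set (GP-UCB style).
    ucb_f = np.where(safe, mu_f + beta * sd_f, -np.inf)
    s_t, x_t = grid[np.argmax(ucb_f)]

    actions.append(np.array([s_t, x_t]))
    f_obs.append(f(s_t, x_t) + 0.01 * rng.standard_normal())
    g_obs.append(g(s_t, x_t) + 0.01 * rng.standard_normal())

print("best observed safe value:", max(f_obs))
```

In this kind of scheme, the pessimistic safe set guarantees (with high probability) that sampled actions satisfy the constraint, while the challenge the paper addresses is how to expand that set toward the safe optimum without the expansion steps themselves accumulating large regret.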

View on arXiv