Nearly Optimal Algorithms for Level Set Estimation

2 November 2021
Blake Mason
Romain Camilleri
Subhojyoti Mukherjee
Kevin G. Jamieson
Robert D. Nowak
Lalit P. Jain
arXiv:2111.01768
Abstract

The level set estimation problem seeks to find all points in a domain $\mathcal{X}$ where the value of an unknown function $f:\mathcal{X}\rightarrow\mathbb{R}$ exceeds a threshold $\alpha$. The estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in $\mathcal{X}$. The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown. In this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the Reproducing Kernel Hilbert Space (RKHS) setting. We assume that $f$ can be approximated by a function in the RKHS up to an unknown misspecification and provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. Moreover, in the linear (kernel) setting, we show that our bounds are nearly optimal: our upper bounds match existing lower bounds for threshold linear bandits. To our knowledge, this work provides the first instance-dependent, non-asymptotic upper bounds on the sample complexity of level set estimation that match information-theoretic lower bounds.
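
To make the explicit-threshold setting concrete, the sketch below runs a simple confidence-interval-based procedure over a finite candidate domain with a kernel ridge surrogate: a point is declared above (or below) the level set once its confidence interval clears $\alpha$, and sampling focuses on the most ambiguous remaining point. This is only a minimal illustrative sketch, not the algorithm proposed in the paper; the function and parameter names (rbf_kernel, beta, budget, lam) are assumptions introduced here.

```python
# Minimal sketch (not the paper's algorithm): confidence-interval-based
# level set estimation with an explicit threshold alpha over a finite
# domain X, using a kernel ridge surrogate. All names are illustrative.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between row-wise point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def level_set_estimate(X, f_noisy, alpha, budget=200, lam=1e-2, beta=2.0):
    """Classify each x in X as above or below the threshold alpha.

    X       : (n, d) array of candidate points (finite domain).
    f_noisy : callable returning a noisy evaluation of f at one point.
    alpha   : explicit threshold (the implicit case would instead track
              a running estimate of (1 - eps) * max_x f(x)).
    """
    n = X.shape[0]
    undecided = np.ones(n, dtype=bool)
    above = np.zeros(n, dtype=bool)
    xs, ys = [], []

    for _ in range(budget):
        if not undecided.any():
            break
        if xs:
            # Kernel ridge posterior mean and a crude width proxy.
            A = np.array(xs)
            K = rbf_kernel(A, A) + lam * np.eye(len(xs))
            Kinv = np.linalg.inv(K)
            kX = rbf_kernel(X, A)
            mu = kX @ Kinv @ np.array(ys)
            var = np.clip(1.0 - np.einsum('ij,jk,ik->i', kX, Kinv, kX), 1e-12, None)
        else:
            mu, var = np.zeros(n), np.ones(n)
        width = beta * np.sqrt(var)

        # Classify points whose confidence interval clears the threshold.
        above |= undecided & (mu - width > alpha)
        undecided &= ~(mu - width > alpha) & ~(mu + width < alpha)

        # Sample the most ambiguous undecided point.
        if undecided.any():
            idx = np.argmax(np.where(undecided, width, -np.inf))
            xs.append(X[idx])
            ys.append(f_noisy(X[idx]))

    return above
```

In the implicit case, $\alpha$ itself would be replaced by a running estimate of $(1-\epsilon)f(x_\ast)$ that is refined as samples accrue; handling this jointly with classification, and with the RKHS misspecification, is part of what the paper's algorithms address with guarantees.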
