ResearchTrend.AI


arXiv:2105.05806
High-Dimensional Experimental Design and Kernel Bandits

12 May 2021
Romain Camilleri
Julian Katz-Samuels
Kevin G. Jamieson
Abstract

In recent years, methods from optimal linear experimental design have been leveraged to obtain state-of-the-art results for linear bandits. A design returned from an objective such as G-optimal design is actually a probability distribution over a pool of potential measurement vectors. Consequently, one nuisance of the approach is the task of converting this continuous probability distribution into a discrete assignment of N measurements. While sophisticated rounding techniques have been proposed, in d dimensions they require N to be at least d, d log(log(d)), or d^2, depending on the sub-optimality of the solution. In this paper we are interested in settings where N may be much less than d, such as experimental design in an RKHS where d may be effectively infinite. We propose a rounding procedure that frees N of any dependence on the dimension d, while achieving nearly the same performance guarantees as existing rounding procedures. We evaluate the procedure against a baseline that projects the problem to a lower-dimensional space and performs rounding there, which requires N to be at least a notion of the effective dimension. We also leverage our new approach in a new algorithm for kernelized bandits to obtain state-of-the-art results for regret minimization and pure exploration. An advantage of our approach over existing UCB-like approaches is that our kernel bandit algorithms are robust to model misspecification.
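To make the rounding task concrete: a continuous experimental design is a probability distribution over a finite pool of measurement vectors, and it must be converted into N actual measurements. The sketch below illustrates the setup with a naive multinomial-sampling baseline; the design distribution, pool, and sampling scheme here are illustrative stand-ins, not the paper's procedure, whose point is precisely to achieve guarantees without requiring N to grow with the dimension d.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a pool of candidate measurement vectors in d
# dimensions, with N much smaller than d (the regime the paper targets).
d, pool_size, N = 50, 200, 10
X = rng.standard_normal((pool_size, d))

# Stand-in design distribution (uniform). A real G-optimal design would
# be computed by optimizing over the information matrix of the pool.
lam = np.full(pool_size, 1.0 / pool_size)

# Naive rounding: draw N measurement indices i.i.d. from the design
# distribution. counts[i] is how many times pool vector i is measured.
counts = rng.multinomial(N, lam)
chosen = np.repeat(np.arange(pool_size), counts)  # shape (N,)

print(counts.sum(), chosen.shape)
```

This naive baseline preserves the design only in expectation; the sophisticated rounding procedures the abstract mentions trade randomness for deterministic guarantees, at the cost of requiring N on the order of d or more.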
