  3. 1406.7447
Unimodal Bandits without Smoothness

28 June 2014
Richard Combes
Alexandre Proutiere
Abstract

We consider stochastic bandit problems with a continuous set of arms and where the expected reward is a continuous and unimodal function of the arm. No further assumption is made regarding the smoothness and the structure of the expected reward function. For these problems, we propose the Stochastic Pentachotomy (SP) algorithm, and derive finite-time upper bounds on its regret and optimization error. In particular, we show that, for any expected reward function $\mu$ that behaves as $\mu(x)=\mu(x^\star)-C|x-x^\star|^\xi$ locally around its maximizer $x^\star$ for some $\xi, C>0$, the SP algorithm is order-optimal. Namely, its regret and optimization error scale as $O(\sqrt{T\log(T)})$ and $O(\sqrt{\log(T)/T})$, respectively, when the time horizon $T$ grows large. These scalings are achieved without knowledge of $\xi$ and $C$. Our algorithm is based on asymptotically optimal sequential statistical tests used to successively trim an interval that contains the best arm with high probability. To our knowledge, the SP algorithm constitutes the first sequential arm selection rule that achieves regret and optimization error scaling as $O(\sqrt{T})$ and $O(1/\sqrt{T})$, respectively, up to a logarithmic factor, for non-smooth expected reward functions as well as for smooth functions with unknown smoothness.
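The interval-trimming idea behind the abstract can be illustrated with a minimal sketch. This is not the authors' SP algorithm: the paper uses asymptotically optimal sequential statistical tests, whereas the sketch below uses a fixed sampling budget per round, and the reward function, noise model, and all parameter values are invented for illustration. It only shows how unimodality lets one discard a slice of the interval adjacent to the worse-looking outer sample point.

```python
import random

def trim_sketch(mu, pulls_per_round=200, rounds=25, seed=0):
    """Simplified interval-trimming loop inspired by the pentachotomy idea.

    Each round samples five equally spaced interior arms of [a, b] with
    Gaussian reward noise (an assumption for this sketch). By unimodality,
    if the leftmost sample point looks worse than the rightmost, the
    maximizer is (with high probability) not in the leftmost slice of the
    interval, which can therefore be trimmed, and symmetrically.
    """
    rng = random.Random(seed)
    a, b = 0.0, 1.0
    for _ in range(rounds):
        # five equally spaced interior sample points (hence "pentachotomy")
        xs = [a + (b - a) * k / 6 for k in range(1, 6)]
        means = []
        for x in xs:
            # average noisy rewards centered at mu(x)
            s = sum(mu(x) + rng.gauss(0.0, 0.1) for _ in range(pulls_per_round))
            means.append(s / pulls_per_round)
        if means[0] < means[-1]:
            a = xs[0]   # trim the leftmost slice
        else:
            b = xs[-1]  # trim the rightmost slice
    return (a + b) / 2

if __name__ == "__main__":
    # non-smooth unimodal reward: mu(x) = -|x - 0.3|^0.7, maximizer at 0.3
    est = trim_sketch(lambda x: -abs(x - 0.3) ** 0.7)
    print(est)
```

Note that the sketch fixes its per-round budget in advance; the paper's contribution is precisely to replace this with sequential tests whose sample sizes adapt to the unknown local behavior $\xi, C$ of the reward function.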
