ResearchTrend.AI


arXiv:1804.05929

UCBoost: A Boosting Approach to Tame Complexity and Optimality for Stochastic Bandits

16 April 2018
Fang Liu
Sinong Wang
Swapna Buccapatnam
Ness B. Shroff
Abstract

In this work, we address the open problem of finding low-complexity near-optimal multi-armed bandit algorithms for sequential decision making problems. Existing bandit algorithms are either sub-optimal and computationally simple (e.g., UCB1) or optimal and computationally complex (e.g., kl-UCB). We propose a boosting approach to Upper Confidence Bound based algorithms for stochastic bandits, which we call UCBoost. Specifically, we propose two types of UCBoost algorithms. We show that UCBoost(D) enjoys O(1) complexity for each arm per round as well as a regret guarantee that is 1/e-close to that of the kl-UCB algorithm. We also propose an approximation-based UCBoost algorithm, UCBoost(ε), that enjoys a regret guarantee ε-close to that of kl-UCB as well as O(log(1/ε)) complexity for each arm per round. Hence, our algorithms provide practitioners a practical way to trade optimality for computational complexity. Finally, we present numerical results which show that UCBoost(ε) can achieve the same regret performance as the standard kl-UCB while incurring only 1% of the computational cost of kl-UCB.
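To see the complexity/optimality tradeoff the abstract describes, the sketch below contrasts the kl-UCB index (the largest mean q consistent with the empirical mean under a KL-divergence budget, found by bisection) with a closed-form O(1) surrogate obtained from Pinsker's inequality kl(p, q) ≥ 2(p − q)². This is a minimal illustration of the underlying idea, not the paper's UCBoost algorithm: UCBoost boosts over a family of closed-form divergences, whereas only the single Pinsker bound is used here, and the names and the (p_hat, budget) example values are made up for the demo.

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12  # clamp away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(p_hat, budget, iters=32):
    """kl-UCB index: the largest q >= p_hat with kl(p_hat, q) <= budget.
    Found by bisection, since kl(p_hat, .) is increasing on [p_hat, 1] --
    the 'optimal but computationally heavier' route."""
    lo, hi = p_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bernoulli_kl(p_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def pinsker_index(p_hat, budget):
    """Closed-form surrogate: Pinsker's inequality kl(p, q) >= 2(p - q)^2
    gives q = p_hat + sqrt(budget / 2) in O(1) -- cheap but looser,
    so it over-explores relative to the exact kl-UCB index."""
    return min(1.0, p_hat + math.sqrt(budget / 2))

# budget is typically log(t) / n_pulls for the arm; values here are illustrative
p_hat, budget = 0.3, 0.5
print(klucb_index(p_hat, budget))    # tight index via bisection
print(pinsker_index(p_hat, budget))  # looser O(1) closed-form index (0.8 here)
```

Because Pinsker's inequality lower-bounds the KL divergence, the closed-form index is always at least the bisection index; UCBoost's contribution is closing that gap with a richer family of closed-form bounds while keeping per-round cost near O(1).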
