A Simple and Optimal Policy Design with Safety against Heavy-tailed Risk for Stochastic Bandits

7 June 2022
D. Simchi-Levi
Zeyu Zheng
Feng Zhu
Abstract

We study the stochastic multi-armed bandit problem and design new policies that enjoy both worst-case optimality for expected regret and light-tailed risk for the regret distribution. Starting from the two-armed bandit setting with time horizon $T$, we propose a simple policy and prove that the policy (i) enjoys worst-case optimality for the expected regret at order $O(\sqrt{T\ln T})$ and (ii) has a worst-case tail probability of incurring linear regret that decays at the exponential rate $\exp(-\Omega(\sqrt{T}))$, a rate that we prove to be best achievable among all worst-case optimal policies. Briefly, compared to the standard Successive Elimination policy and Upper Confidence Bound policy, our proposed policy achieves a delicate balance by doing more exploration at the beginning of the time horizon and more exploitation when approaching the end. We then improve the policy design and analysis to work for the general $K$-armed bandit setting. Specifically, the worst-case probability of incurring a regret larger than any $x>0$ is upper bounded by $\exp(-\Omega(x/\sqrt{KT}))$. We then enhance the policy design to accommodate the "any-time" setting where $T$ is not known a priori, and prove performance guarantees matching those of the "fixed-time" setting with known $T$. We conduct a brief set of numerical experiments to illustrate the theoretical findings. We conclude by extending our proposed policy design to the general stochastic linear bandit setting and proving that the policy achieves both worst-case optimality in the order of expected regret and light-tailed risk in the regret distribution.
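For a concrete point of reference, below is a minimal sketch of the standard two-armed Successive Elimination baseline that the abstract contrasts with. The confidence radius, the confidence level `delta`, and the `pull_arm` interface are illustrative assumptions of this sketch; this is not the paper's proposed policy, whose exploration schedule differs precisely in the way the abstract describes.

```python
import numpy as np

def successive_elimination_two_arms(T, pull_arm, delta=None):
    """Standard Successive Elimination for a 2-armed bandit.

    A baseline sketch only (not the paper's policy). pull_arm(k)
    returns a reward in [0, 1] for arm k in {0, 1}.
    """
    if delta is None:
        delta = 1.0 / T  # common confidence-level choice; an assumption here
    sums = np.zeros(2)
    counts = np.zeros(2)
    active = [0, 1]
    for t in range(T):
        if len(active) == 1:
            # Exploitation phase: commit to the surviving arm.
            k = active[0]
        else:
            # Exploration phase: alternate between the active arms.
            k = active[t % 2]
        sums[k] += pull_arm(k)
        counts[k] += 1
        if len(active) == 2 and counts[0] > 0 and counts[1] > 0:
            means = sums / counts
            # Hoeffding-style confidence radius; a standard choice,
            # not the paper's exact design.
            radius = np.sqrt(2 * np.log(1 / delta) / counts)
            lo, hi = means - radius, means + radius
            if hi[0] < lo[1]:
                active = [1]  # arm 0 is provably worse; eliminate it
            elif hi[1] < lo[0]:
                active = [0]  # arm 1 is provably worse; eliminate it
    return sums, counts

# Example run on Bernoulli arms with means 0.5 and 0.6 (illustrative).
rng = np.random.default_rng(0)
sums, counts = successive_elimination_two_arms(
    10_000, pull_arm=lambda k: rng.binomial(1, (0.5, 0.6)[k]))
```

Per the abstract, the proposed policy departs from this schedule by exploring more at the beginning of the horizon and exploiting more toward the end, which is what yields the light-tailed $\exp(-\Omega(\sqrt{T}))$ bound on the regret distribution.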
