
Adaptive Regret for Bandits Made Possible: Two Queries Suffice

17 January 2024
Zhou Lu
Qiuyi Zhang
Xinyi Chen
Fred Zhang
David P. Woodruff
Elad Hazan
arXiv:2401.09278
Abstract

Fast-changing states or volatile environments pose a significant challenge to online optimization, which needs to perform rapid adaptation under limited observation. In this paper, we give query- and regret-optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$. Due to its worst-case nature, there is an almost-linear $\Omega(|I|^{1-\epsilon})$ regret lower bound when only one query per round is allowed [Daniely et al., ICML 2015]. Surprisingly, with just two queries per round, we give a Strongly Adaptive Bandit Learner (StABL) that achieves $\tilde{O}(\sqrt{n|I|})$ adaptive regret for multi-armed bandits with $n$ arms. The bound is tight and cannot be improved in general. Our algorithm leverages a multiplicative update scheme with varying stepsizes and a carefully chosen observation distribution to control the variance. Furthermore, we extend our results and provide optimal algorithms in the bandit convex optimization setting. Finally, we empirically demonstrate the superior performance of our algorithms under volatile environments and for downstream tasks, such as algorithm selection for hyperparameter optimization.
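The abstract only outlines the method, so the sketch below illustrates the general two-query idea it describes: a multiplicative-weights bandit learner that spends one query on the arm it plays and a second query, drawn from a separate observation distribution, to build an importance-weighted loss estimate with controlled variance. This is a minimal toy under stated assumptions (the function name two_query_mw_bandit, the uniform observation distribution, and the fixed stepsize eta are all illustrative), not the paper's StABL algorithm, which additionally aggregates updates over varying stepsizes to obtain the stated adaptive-regret bound.

# Minimal sketch (NOT the paper's StABL algorithm): a multiplicative-weights
# bandit using two loss queries per round -- one for the arm it plays and one
# drawn from a separate observation distribution, used only to build a
# low-variance importance-weighted loss estimate.
import numpy as np

def two_query_mw_bandit(loss_matrix, eta=0.1, rng=None):
    """Run the sketch on a T x n matrix of adversarial losses in [0, 1].

    Returns the sequence of played arms and the cumulative loss incurred.
    The uniform observation distribution and fixed stepsize are assumptions
    made for illustration, not choices taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, n = loss_matrix.shape
    log_weights = np.zeros(n)          # multiplicative-weights state
    obs_dist = np.full(n, 1.0 / n)     # observation distribution (assumed uniform here)
    played, total_loss = [], 0.0

    for t in range(T):
        # Playing distribution from the current weights (numerically stable softmax).
        p = np.exp(log_weights - log_weights.max())
        p /= p.sum()

        # Query 1: the arm we actually play and pay for.
        arm = rng.choice(n, p=p)
        total_loss += loss_matrix[t, arm]
        played.append(arm)

        # Query 2: an extra observation used only for the loss estimate.
        obs = rng.choice(n, p=obs_dist)
        est = np.zeros(n)
        est[obs] = loss_matrix[t, obs] / obs_dist[obs]   # importance weighting

        # Multiplicative update (fixed stepsize in this sketch; the paper
        # combines varying stepsizes to achieve strongly adaptive regret).
        log_weights -= eta * est

    return np.array(played), total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy non-stationary environment: the best arm switches halfway through.
    T, n = 2000, 5
    losses = rng.uniform(size=(T, n))
    losses[: T // 2, 0] *= 0.2     # arm 0 is best early on
    losses[T // 2 :, 3] *= 0.2     # arm 3 is best later
    arms, cum_loss = two_query_mw_bandit(losses, eta=0.05, rng=rng)
    print(f"cumulative loss: {cum_loss:.1f} over {T} rounds")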
