arXiv:1902.00980
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free

3 February 2019
Yifang Chen
Chung-Wei Lee
Haipeng Luo
Chen-Yu Wei
Abstract

We propose the first contextual bandit algorithm that is parameter-free, efficient, and optimal in terms of dynamic regret. Specifically, our algorithm achieves dynamic regret $\mathcal{O}(\min\{\sqrt{ST}, \Delta^{\frac{1}{3}}T^{\frac{2}{3}}\})$ for a contextual bandit problem with $T$ rounds, $S$ switches, and $\Delta$ total variation in data distributions. Importantly, our algorithm is adaptive and does not need to know $S$ or $\Delta$ ahead of time, and it can be implemented efficiently assuming access to an ERM oracle. Our results strictly improve the $\mathcal{O}(\min\{S^{\frac{1}{4}}T^{\frac{3}{4}}, \Delta^{\frac{1}{5}}T^{\frac{4}{5}}\})$ bound of (Luo et al., 2018), and greatly generalize and improve the $\mathcal{O}(\sqrt{ST})$ result of (Auer et al., 2018), which holds only for the two-armed bandit problem without contextual information. The key novelty of our algorithm is the introduction of replay phases, in which the algorithm acts according to its previous decisions for a certain amount of time in order to detect non-stationarity while maintaining a good balance between exploration and exploitation.
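The core of a replay phase is a statistical comparison: replay an earlier policy for a short window and check whether the reward it now earns still matches its historical average. The sketch below is only an illustration of that idea under a Hoeffding-style test for rewards in $[0, 1]$; the function name, the threshold, and the test itself are assumptions for exposition, not the paper's actual procedure.

```python
import math

def replay_detects_change(past_rewards, replay_rewards, delta=0.05):
    """Illustrative two-sample test (not the paper's exact criterion):
    flag non-stationarity when the mean reward observed while replaying
    an old policy deviates from that policy's historical mean by more
    than a Hoeffding-style confidence margin for rewards in [0, 1]."""
    n, m = len(past_rewards), len(replay_rewards)
    if n == 0 or m == 0:
        return False  # nothing to compare yet
    gap = abs(sum(past_rewards) / n - sum(replay_rewards) / m)
    # Hoeffding deviation bound for each sample mean, combined by a
    # union bound over the two samples at confidence level 1 - delta.
    margin = (math.sqrt(math.log(2 / delta) / (2 * n))
              + math.sqrt(math.log(2 / delta) / (2 * m)))
    return gap > margin

# Stationary environment: replayed rewards match history, no alarm.
print(replay_detects_change([0.5] * 1000, [0.5] * 100))   # False
# Shifted environment: replayed rewards collapse, alarm fires.
print(replay_detects_change([0.9] * 1000, [0.1] * 100))   # True
```

In the actual algorithm such a check is interleaved with the bandit's normal exploration-exploitation play, so that detection costs only the short replay window rather than continuous forced exploration.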
