A New Look at Dynamic Regret for Non-Stationary Stochastic Bandits

17 January 2022
Yasin Abbasi-Yadkori
András György
N. Lazić
arXiv:2201.06532
Abstract

We study the non-stationary stochastic multi-armed bandit problem, where the reward statistics of each arm may change several times during the course of learning. The performance of a learning algorithm is evaluated in terms of its dynamic regret, defined as the difference between the expected cumulative reward of an agent choosing the optimal arm in every round and the cumulative reward of the learning algorithm. One way to measure the hardness of such environments is to consider how many times the identity of the optimal arm can change. We propose a method that achieves, in $K$-armed bandit problems, a near-optimal $\widetilde O(\sqrt{KN(S+1)})$ dynamic regret, where $N$ is the number of rounds and $S$ is the number of times the identity of the optimal arm changes, without prior knowledge of $S$ and $N$. Previous works for this problem obtain regret bounds that scale with the number of changes (or the amount of change) in the reward functions, which can be much larger, or assume prior knowledge of $S$ to achieve similar bounds.
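As a sketch of the quantity the abstract describes: writing $\mu_t(a)$ for the mean reward of arm $a$ in round $t$ and $A_t$ for the arm the algorithm pulls (notation introduced here for illustration, not fixed by the abstract), the dynamic regret over $N$ rounds is

$$
R_N \;=\; \sum_{t=1}^{N} \max_{a \in \{1,\dots,K\}} \mu_t(a) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{N} \mu_t(A_t)\right],
$$

and under one standard formalization of the hardness measure, $S$ counts the rounds $t < N$ at which $\arg\max_a \mu_{t+1}(a) \neq \arg\max_a \mu_t(a)$. The result above then reads $R_N = \widetilde O(\sqrt{KN(S+1)})$, achieved without the algorithm knowing $S$ or $N$ in advance.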
