
Smooth Non-Stationary Bandits

International Conference on Machine Learning (ICML), 2023
Main: 21 pages
Figures: 8
Bibliography: 3 pages
Appendix: 16 pages
Abstract

In many applications of online decision making, the environment is non-stationary and it is therefore crucial to use bandit algorithms that handle changes. Most existing approaches are designed to protect against non-smooth changes, constrained only by total variation or Lipschitzness over time, and they guarantee $\tilde{\Theta}(T^{2/3})$ regret. However, in practice environments often change smoothly, so such algorithms may incur higher-than-necessary regret in these settings and do not leverage information on the rate of change. We study a non-stationary two-armed bandit problem where we assume that an arm's mean reward is a $\beta$-Hölder function over (normalized) time, meaning it is $(\beta-1)$-times Lipschitz-continuously differentiable. We show the first separation between the smooth and non-smooth regimes by presenting a policy with $\tilde{O}(T^{3/5})$ regret for $\beta = 2$. We complement this result with an $\Omega(T^{(\beta+1)/(2\beta+1)})$ lower bound for any integer $\beta \ge 1$, which matches our upper bound for $\beta = 2$.
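For concreteness, the smoothness assumption can be written out as the standard Hölder condition for integer $\beta$. This is a hedged sketch: the Lipschitz constant $L$ and the arm-indexed notation $\mu_k$ are introduced here for illustration and are not named in the abstract; time is assumed normalized to $[0,1]$.

% \beta-Hölder smoothness (integer case), as described in the abstract:
% the mean reward \mu_k : [0,1] \to \mathbb{R} of arm k is (\beta-1)-times
% differentiable and its (\beta-1)-st derivative is L-Lipschitz.
% L is a generic constant introduced here for illustration.
\[
  \bigl| \mu_k^{(\beta-1)}(t) - \mu_k^{(\beta-1)}(t') \bigr|
  \;\le\; L \, |t - t'|
  \qquad \text{for all } t, t' \in [0,1].
\]
% For \beta = 2 this says the derivative \mu_k' is Lipschitz; this is the
% regime in which the paper's \tilde{O}(T^{3/5}) upper bound applies.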
