Variance-Dependent Regret Bounds for Non-stationary Linear Bandits

We investigate the non-stationary stochastic linear bandit problem, where the reward distribution evolves each round. Existing algorithms characterize the non-stationarity by the total variation budget B_K, the summation of the changes in the consecutive feature vectors of the linear bandits over K rounds. However, this quantity measures the non-stationarity only with respect to the expectation of the reward distribution, which makes existing algorithms sub-optimal in the general non-stationary distribution setting. In this work, we propose algorithms that utilize the variance of the reward distribution as well as B_K, and show that they can achieve tighter regret upper bounds. Specifically, we introduce two novel algorithms, Restarted-WeightedOFUL+ and Restarted-SAVE+, which address the cases where the variance information of the rewards is known and unknown, respectively. Notably, when the total variance V_K is much smaller than K, our algorithms outperform previous state-of-the-art results on non-stationary stochastic linear bandits under different settings. Experimental evaluations further validate the superior performance of our proposed algorithms over existing works.
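To make the two ingredients in the abstract concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm or analysis) of how variance weighting and periodic restarts combine in a linear bandit: observations are down-weighted by their noise variance in the ridge-regression estimate (the known-variance setting), and the estimator is reset every H rounds to discard stale data under drift. The simulation setup, the function name `restarted_weighted_ucb`, and all parameter choices here are assumptions for illustration.

```python
import numpy as np

def restarted_weighted_ucb(features, theta_seq, sigma_seq, H, lam=1.0, beta=1.0, seed=0):
    """Illustrative sketch: variance-weighted ridge regression with UCB action
    selection and a restart every H rounds (hypothetical simplification).

    features:  (K, n_arms, d) candidate feature vectors per round
    theta_seq: (K, d) drifting true parameters (simulation only)
    sigma_seq: (K,) known per-round reward noise std
    Returns the cumulative regret of the run.
    """
    rng = np.random.default_rng(seed)
    K, n_arms, d = features.shape
    A = lam * np.eye(d)              # variance-weighted Gram matrix
    b = np.zeros(d)                  # variance-weighted response vector
    regret = 0.0
    for k in range(K):
        if k % H == 0:               # restart: forget old data to track drift
            A = lam * np.eye(d)
            b = np.zeros(d)
        theta_hat = np.linalg.solve(A, b)
        A_inv = np.linalg.inv(A)
        X = features[k]
        # optimistic index: point estimate plus an exploration bonus
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', X, A_inv, X))
        a = int(np.argmax(X @ theta_hat + beta * bonus))
        x = X[a]
        mean = X @ theta_seq[k]
        r = mean[a] + sigma_seq[k] * rng.standard_normal()
        w = 1.0 / max(sigma_seq[k] ** 2, 1e-6)   # weight by inverse variance
        A += w * np.outer(x, x)
        b += w * r * x
        regret += mean.max() - mean[a]
    return regret
```

Low-variance rounds thus contribute more to the estimate, which is the intuition behind regret bounds depending on the total variance rather than on the round count alone; the restart schedule handles the variation budget.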