On learning Whittle index policy for restless bandits with scalable regret

Reinforcement learning is an attractive approach to learn good resource allocation and scheduling policies based on data when the system model is unknown. However, the cumulative regret of most RL algorithms scales as $\tilde{\mathcal{O}}(\mathsf{S}\sqrt{\mathsf{A}T})$, where $\mathsf{S}$ is the size of the state space, $\mathsf{A}$ is the size of the action space, $T$ is the horizon, and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms. Due to the linear dependence on the size of the state space, these regret bounds are prohibitively large for resource allocation and scheduling problems. In this paper, we present a model-based RL algorithm for such problems which has scalable regret. In particular, we consider a restless bandit model and propose a Thompson-sampling based learning algorithm which is tuned to the underlying structure of the model. We present two characterizations of the regret of the proposed algorithm with respect to the Whittle index policy. First, we show that for a restless bandit with $n$ arms and at most $m$ activations at each time, the regret scales either as $\tilde{\mathcal{O}}(mn\sqrt{T})$ or $\tilde{\mathcal{O}}(n^2\sqrt{T})$, depending on the reward model. Second, under an additional technical assumption, we show that the regret scales as $\tilde{\mathcal{O}}(n^{1.5}\sqrt{T})$ or $\tilde{\mathcal{O}}(\max\{m\sqrt{n}, n\}\sqrt{T})$. We present numerical examples to illustrate the salient features of the algorithm.
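To make the idea concrete, below is a minimal sketch (not the paper's algorithm verbatim) of a Thompson-sampling learner for a restless bandit: each episode it samples a transition model for every arm from a Dirichlet posterior, computes per-arm Whittle indices for the sampled model, and activates the $m$ arms with the largest indices. The environment interface (`env`, `env.rewards`), the Dirichlet(1) prior, the discounted-reward surrogate inside `whittle_index`, and the indexability assumption are all illustrative choices made for this example.

```python
import numpy as np

def whittle_index(P, r, s, beta=0.95, lo=-10.0, hi=10.0, iters=30):
    """Approximate Whittle index of state s for one arm, assuming indexability.
    Binary search on the passive-action subsidy; a discounted criterion is used
    here as a simple surrogate.  P: (2, S, S) kernels for passive/active actions,
    r: (2, S) rewards."""
    def passive_optimal(subsidy):
        V = np.zeros(P.shape[1])
        for _ in range(200):                       # discounted value iteration
            q0 = r[0] + subsidy + beta * P[0] @ V  # passive action + subsidy
            q1 = r[1] + beta * P[1] @ V            # active action
            V = np.maximum(q0, q1)
        return q0[s] >= q1[s]
    for _ in range(iters):                         # bisect on the subsidy
        mid = 0.5 * (lo + hi)
        if passive_optimal(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def thompson_whittle(env, n_arms, n_states, m, episodes, horizon, rng):
    """Episodic posterior sampling: sample each arm's transition model,
    act with the Whittle index policy of the sampled model, update counts."""
    counts = np.ones((n_arms, 2, n_states, n_states))   # Dirichlet(1) prior
    for _ in range(episodes):
        # One plausible transition kernel per (arm, action, state) row.
        P = np.array([[[rng.dirichlet(counts[i, a, s]) for s in range(n_states)]
                       for a in range(2)] for i in range(n_arms)])
        states = env.reset()
        for _ in range(horizon):
            idx = [whittle_index(P[i], env.rewards[i], states[i])
                   for i in range(n_arms)]
            active = np.argsort(idx)[-m:]                # m largest indices
            actions = np.isin(np.arange(n_arms), active).astype(int)
            next_states = env.step(actions)
            for i in range(n_arms):                      # posterior update
                counts[i, actions[i], states[i], next_states[i]] += 1
            states = next_states
```

Because the posterior factorizes across arms, the learner estimates each arm's small transition model separately rather than the exponentially large joint state space, which is the structural property that makes per-arm regret scaling plausible.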