
Stochastic Bandits with Linear Constraints

Abstract

We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies whose expected cumulative reward over the course of $T$ rounds is maximum, and each of which has an expected cost below a certain threshold $\tau$. We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB), and prove an $\widetilde{\mathcal{O}}(\frac{d\sqrt{T}}{\tau-c_0})$ bound on its $T$-round regret, where the denominator is the difference between the constraint threshold and the cost of a known feasible action. We further specialize our results to multi-armed bandits and propose a computationally efficient algorithm for this setting. We prove a regret bound of $\widetilde{\mathcal{O}}(\frac{\sqrt{KT}}{\tau-c_0})$ for this algorithm in $K$-armed bandits, which is a $\sqrt{K}$ improvement over the regret bound we obtain by simply casting multi-armed bandits as an instance of contextual linear bandits and using the regret bound of OPLB. We also prove a lower bound for the problem studied in the paper and provide simulations to validate our theoretical results.
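To make the optimistic-pessimistic idea named in the abstract concrete, below is a minimal sketch of that principle in a constrained linear bandit: be optimistic about rewards (add a confidence width) while being pessimistic about costs (also add the width), and only play arms whose pessimistic cost estimate stays below $\tau$. This is an illustration under assumed details, not the paper's exact OPLB algorithm; in particular, the confidence multiplier `beta`, the ridge regularizer `lam`, and the hard feasibility filter (rather than mixing with a known feasible action of cost $c_0$) are all assumptions made here for brevity.

```python
import numpy as np

# Sketch of an optimistic-pessimistic selection rule for constrained
# linear bandits. Assumed, illustrative details: beta (confidence width),
# lam (ridge regularizer), and filtering by a pessimistic cost estimate
# instead of the paper's policy construction around a known feasible action.

class OptimisticPessimisticBandit:
    def __init__(self, d, tau, beta=1.0, lam=1.0):
        self.tau = tau            # cost threshold from the problem setting
        self.beta = beta          # confidence-width multiplier (assumed)
        self.A = lam * np.eye(d)  # ridge design matrix, shared by both models
        self.b_r = np.zeros(d)    # reward regression target
        self.b_c = np.zeros(d)    # cost regression target

    def select(self, arms):
        """Pick the reward-optimistic arm among cost-pessimistic feasible arms."""
        A_inv = np.linalg.inv(self.A)
        theta_r = A_inv @ self.b_r  # least-squares reward parameter estimate
        theta_c = A_inv @ self.b_c  # least-squares cost parameter estimate
        best, best_ucb = None, -np.inf
        for x in arms:
            width = self.beta * np.sqrt(x @ A_inv @ x)
            r_ucb = x @ theta_r + width  # optimism for the reward
            c_ucb = x @ theta_c + width  # pessimism for the cost
            if c_ucb <= self.tau and r_ucb > best_ucb:
                best, best_ucb = x, r_ucb
        return best  # None if no arm is provably feasible yet

    def update(self, x, reward, cost):
        # Rank-one update of the design matrix and both regression targets.
        self.A += np.outer(x, x)
        self.b_r += reward * x
        self.b_c += cost * x
```

The pessimistic cost check is what ties the achievable regret to the gap $\tau - c_0$: the tighter the threshold sits to the cost of the known feasible action, the fewer arms can be certified feasible early on, and the more conservatively the learner must play.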
