On the Performance of Thompson Sampling on Logistic Bandits

Abstract

We study the logistic bandit, in which rewards are binary with success probability $\exp(\beta a^\top \theta) / (1 + \exp(\beta a^\top \theta))$ and actions $a$ and coefficients $\theta$ are within the $d$-dimensional unit ball. While prior regret bounds for algorithms that address the logistic bandit exhibit exponential dependence on the slope parameter $\beta$, we establish a regret bound for Thompson sampling that is independent of $\beta$. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is $\tilde{O}(d\sqrt{T})$. We also establish a $\tilde{O}(\sqrt{d\eta T}/\lambda)$ bound that applies more broadly, where $\lambda$ is the worst-case optimal log-odds and $\eta$ is the "fragility dimension," a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any $\epsilon > 0$, no algorithm can achieve $\mathrm{poly}(d, 1/\lambda)\cdot T^{1-\epsilon}$ regret.
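To make the setting concrete, here is a minimal sketch of Thompson sampling on a logistic bandit. It assumes a finite candidate set of coefficient vectors (so the posterior can be maintained exactly), with the action set equal to the candidate set, mirroring the regime of the $\tilde{O}(d\sqrt{T})$ bound. All names and parameter values (`d`, `K`, `beta`, `T`) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    # Success probability exp(x) / (1 + exp(x)), written in a stable form.
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative finite setting: K candidate coefficient vectors on the
# d-dimensional unit sphere; the feasible action set equals the candidate set.
d, K, beta, T = 3, 20, 2.0, 500
candidates = rng.normal(size=(K, d))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
actions = candidates

theta_star = candidates[0]   # true (unknown) coefficient vector
log_post = np.zeros(K)       # uniform prior over candidates, in log scale

for t in range(T):
    # Thompson sampling: draw theta from the posterior, act greedily for it.
    probs = np.exp(log_post - log_post.max())
    probs /= probs.sum()
    theta = candidates[rng.choice(K, p=probs)]
    a = actions[np.argmax(actions @ theta)]

    # Binary reward with success probability logistic(beta * a . theta_star).
    r = rng.random() < logistic(beta * a @ theta_star)

    # Exact Bayesian update over the finite candidate set.
    p = logistic(beta * (candidates @ a))
    log_post += np.log(p) if r else np.log(1.0 - p)
```

With a continuous prior, the exact posterior over $\theta$ is intractable, and practical implementations typically substitute an approximate sampler (e.g. a Laplace approximation); the finite-candidate version above sidesteps that issue for clarity.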
