
Constrained Linear Thompson Sampling

Main: 12 pages, 10 figures, 4 tables; Bibliography: 3 pages; Appendix: 28 pages
Abstract

We study safe linear bandits (SLBs), where an agent selects actions from a convex set to maximize an unknown linear objective subject to unknown linear constraints in each round. Existing methods for SLBs provide strong regret guarantees, but require solving expensive optimization problems (e.g., second-order cone programs, NP-hard programs). To address this, we propose Constrained Linear Thompson Sampling (COLTS), a sampling-based framework that selects actions by solving perturbed linear programs, which significantly reduces computational costs while matching the regret and risk of prior methods. We develop two main variants: S-COLTS, which ensures zero risk and $\widetilde{O}(\sqrt{d^3 T})$ regret given a safe action, and R-COLTS, which achieves $\widetilde{O}(\sqrt{d^3 T})$ regret and risk with no instance information. In simulations, these methods match or outperform state-of-the-art SLB approaches while substantially improving scalability. On the technical front, we introduce a novel coupled noise design that ensures frequent 'local optimism' about the true optimum, and a scaling-based analysis to handle the per-round variability of constraints.
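To make the core idea concrete, here is a minimal per-round sketch of a perturbed-LP action selection in the spirit described above. The problem instance, covariance proxy, and the exact form of the coupled perturbation are illustrative assumptions, not the paper's construction; it only shows how a single shared Gaussian draw can perturb both the objective and constraint estimates before one cheap LP solve.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d = 3  # action dimension

# Hypothetical instance: unknown objective theta* and one unknown
# linear constraint a*^T x <= b, both to be estimated from data.
theta_star = np.array([1.0, 0.5, -0.2])
a_star = np.array([0.3, 0.4, 0.1])
b = 1.0

# Stand-ins for regularized least-squares estimates after some rounds.
theta_hat = theta_star + 0.05 * rng.standard_normal(d)
a_hat = a_star + 0.05 * rng.standard_normal(d)
cov_sqrt = 0.1 * np.eye(d)  # square root of a shared covariance proxy

def colts_step():
    # Coupled noise: the SAME Gaussian draw perturbs the objective and
    # the constraint estimates (a sketch of the coupled-noise idea,
    # not the paper's exact design).
    xi = rng.standard_normal(d)
    theta_tilde = theta_hat + cov_sqrt @ xi
    a_tilde = a_hat + cov_sqrt @ xi
    # One LP per round: maximize theta_tilde^T x over the box [0,1]^d
    # subject to the sampled constraint a_tilde^T x <= b.
    res = linprog(-theta_tilde, A_ub=a_tilde[None, :], b_ub=[b],
                  bounds=[(0.0, 1.0)] * d)
    return res.x

x_t = colts_step()
print(x_t)  # the action played this round
```

The computational point is that each round costs one LP over the known convex action set, rather than a second-order cone or nonconvex program.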

@article{gangrade2025_2503.02043,
  title={Constrained Linear Thompson Sampling},
  author={Aditya Gangrade and Venkatesh Saligrama},
  journal={arXiv preprint arXiv:2503.02043},
  year={2025}
}