
Thompson Sampling for Contextual Bandits with Linear Payoffs

15 September 2012
Shipra Agrawal
Navin Goyal
Abstract

Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated that it has better empirical performance than state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of the Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, where the contexts are provided by an adaptive adversary. This is among the most important and widely studied versions of the contextual bandits problem. We provide the first theoretical guarantees for the contextual version of Thompson Sampling. We prove a high-probability regret bound of $\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T \log(N)})$), which is the best regret bound achieved by any computationally efficient algorithm for this problem in the current literature, and is within a factor of $\sqrt{d}$ (or $\sqrt{\log(N)}$) of the information-theoretic lower bound for this problem.
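For intuition, here is a minimal sketch of the Gaussian-prior linear Thompson Sampling scheme the paper analyzes: maintain a Gaussian posterior over the unknown parameter, sample a parameter vector each round, and play the arm whose context maximizes the sampled linear payoff. The helper names `contexts_fn` and `reward_fn` and the fixed scale `v` are illustrative assumptions, not the paper's pseudocode; the paper chooses the posterior scale as a function of d, T, and the noise level.

```python
import numpy as np

def linear_thompson_sampling(contexts_fn, reward_fn, T, d, v=1.0, rng=None):
    """Sketch of Thompson Sampling for linear contextual bandits.

    Each round t, contexts_fn(t) returns an (N, d) array of arm contexts
    and reward_fn(t, arm) returns the observed reward for the chosen arm.
    """
    rng = np.random.default_rng() if rng is None else rng
    B = np.eye(d)      # posterior precision matrix (starts at identity prior)
    f = np.zeros(d)    # running sum of context * reward
    for t in range(T):
        mu_hat = np.linalg.solve(B, f)                    # posterior mean
        cov = v ** 2 * np.linalg.inv(B)                   # posterior covariance
        mu_tilde = rng.multivariate_normal(mu_hat, cov)   # posterior sample
        X = contexts_fn(t)                                # (N, d) contexts
        arm = int(np.argmax(X @ mu_tilde))                # play best arm under the sample
        r = reward_fn(t, arm)
        b = X[arm]
        B += np.outer(b, b)                               # rank-one precision update
        f += r * b
```

The exploration here comes entirely from sampling mu_tilde rather than acting on the posterior mean: arms whose payoff is uncertain under the current posterior are occasionally sampled as best, which is what drives the regret analysis.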
