ResearchTrend.AI

arXiv:1803.04623
Thompson Sampling for Combinatorial Semi-Bandits

13 March 2018
Siwei Wang
Wei Chen
Abstract

In this paper, we study the application of the Thompson sampling (TS) methodology to the stochastic combinatorial multi-armed bandit (CMAB) framework. We first analyze the standard TS algorithm for the general CMAB model when the outcome distributions of all the base arms are independent, and obtain a distribution-dependent regret bound of $O(m \log K_{\max} \log T / \Delta_{\min})$, where $m$ is the number of base arms, $K_{\max}$ is the size of the largest super arm, $T$ is the time horizon, and $\Delta_{\min}$ is the minimum gap between the expected reward of the optimal solution and that of any non-optimal solution. This regret upper bound improves on the $O(m (\log K_{\max})^2 \log T / \Delta_{\min})$ bound of prior works. Moreover, our novel analysis techniques can help tighten the regret bounds of other existing UCB-based policies (e.g., ESCB), since we improve the method of counting the cumulative regret. We then consider the matroid bandit setting (a special class of the CMAB model), where we can remove the independence assumption across arms and achieve a regret upper bound that matches the lower bound. Beyond the regret upper bounds, we also show that one cannot directly replace the exact offline oracle (which takes the parameters of an offline problem instance as input and outputs the exact best action under this instance) with an approximation oracle in the TS algorithm, even for the classical MAB problem. Finally, we present experiments comparing the regret of TS with that of other existing algorithms; the results show that TS outperforms the existing baselines.
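The TS procedure analyzed in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Beta priors, Bernoulli base-arm outcomes, and the top-$K$ selection oracle are simplifying assumptions (the paper treats general combinatorial structures and offline oracles), and all function and variable names are hypothetical.

```python
import random


def thompson_sampling_cmab(m, K, true_means, T, seed=0):
    """Sketch of Thompson sampling for combinatorial semi-bandits.

    Assumptions (for illustration only): each base arm has an
    independent Bernoulli outcome, the posterior is Beta, and the
    exact offline oracle returns the K arms with the highest sampled
    means (a simple combinatorial structure with super arms of size K).
    Returns the total reward collected over T rounds.
    """
    rng = random.Random(seed)
    alpha = [1] * m  # Beta posterior success counts per base arm
    beta = [1] * m   # Beta posterior failure counts per base arm
    total_reward = 0
    for _ in range(T):
        # Sample a mean for each base arm from its posterior.
        theta = [rng.betavariate(alpha[i], beta[i]) for i in range(m)]
        # Exact oracle: best super arm under the sampled parameters.
        super_arm = sorted(range(m), key=lambda i: theta[i], reverse=True)[:K]
        # Semi-bandit feedback: observe the outcome of every played base arm
        # and update its posterior.
        for i in super_arm:
            outcome = 1 if rng.random() < true_means[i] else 0
            alpha[i] += outcome
            beta[i] += 1 - outcome
            total_reward += outcome
    return total_reward
```

The key point the abstract's negative result touches on is the oracle call: here the oracle is exact (it maximizes over the sampled parameters), and the paper shows that substituting an approximation oracle at this step can break TS even in the classical MAB case.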
