Thompson Sampling for Budgeted Multi-armed Bandits

1 May 2015
Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, Tie-Yan Liu
arXiv: 1505.00146

Papers citing "Thompson Sampling for Budgeted Multi-armed Bandits"

6 / 6 papers shown

Bandits with Knapsacks
Ashwinkumar Badanidiyuru, Robert D. Kleinberg, Aleksandrs Slivkins
11 May 2013

Further Optimal Regret Bounds for Thompson Sampling
Shipra Agrawal, Navin Goyal
15 Sep 2012

Thompson Sampling: An Asymptotically Optimal Finite Time Analysis
E. Kaufmann, N. Korda, Rémi Munos
18 May 2012

Knapsack based Optimal Policies for Budget-Limited Multi-Armed Bandits
Long Tran-Thanh, Archie C. Chapman, A. Rogers, N. Jennings
09 Apr 2012

The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond
Aurélien Garivier, Olivier Cappé
12 Feb 2011

A Contextual-Bandit Approach to Personalized News Article Recommendation
Lihong Li, Wei Chu, John Langford, Robert Schapire
28 Feb 2010