arXiv: 2303.06058 (v2, latest)
A General Recipe for the Analysis of Randomized Multi-Armed Bandit Algorithms
10 March 2023
Dorian Baudry, Kazuya Suzuki, Junya Honda
Papers citing "A General Recipe for the Analysis of Randomized Multi-Armed Bandit Algorithms" (8 of 8 papers shown):
- Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits. Tianyuan Jin, Pan Xu, X. Xiao, Anima Anandkumar. 07 Jun 2022.
- From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses. D. Tiapkin, Denis Belomestny, Eric Moulines, A. Naumov, S. Samsonov, Yunhao Tang, Michal Valko, Pierre Menard. 16 May 2022.
- Regret Minimization in Heavy-Tailed Bandits. Shubhada Agrawal, Sandeep Juneja, Wouter M. Koolen. 07 Feb 2021.
- Sub-sampling for Efficient Non-Parametric Bandit Exploration. Dorian Baudry, E. Kaufmann, Odalric-Ambrym Maillard. 27 Oct 2020.
- Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits. Branislav Kveton, Csaba Szepesvári, Sharan Vaswani, Zheng Wen, Mohammad Ghavamzadeh, Tor Lattimore. 13 Nov 2018.
- Optimality of Thompson Sampling for Gaussian Bandits Depends on Priors. Junya Honda, Akimichi Takemura. 08 Nov 2013.
- Kullback-Leibler upper confidence bounds for optimal sequential allocation. Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz. 03 Oct 2012.
- Bandits with heavy tail. Sébastien Bubeck, Nicolò Cesa-Bianchi, Gábor Lugosi. 08 Sep 2012.