ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
MOTS: Minimax Optimal Thompson Sampling

3 March 2020
Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, Quanquan Gu

Papers citing "MOTS: Minimax Optimal Thompson Sampling"

10 papers shown
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret
Bingshan Hu, Zhiming Huang, Tianyue H. Zhang, Mathias Lécuyer, Nidhi Hegde
05 May 2025
Constrained Exploration via Reflected Replica Exchange Stochastic Gradient Langevin Dynamics
Haoyang Zheng, Hengrong Du, Qi Feng, Wei Deng, Guang Lin
13 May 2024
Efficient and Adaptive Posterior Sampling Algorithms for Bandits
Bingshan Hu, Zhiming Huang, Tianyue H. Zhang, Mathias Lécuyer, Nidhi Hegde
02 May 2024
Multi-Armed Bandits with Abstention
Junwen Yang, Tianyuan Jin, Vincent Y. F. Tan
23 Feb 2024
Zero-Inflated Bandits
Haoyu Wei, Runzhe Wan, Lei Shi, Rui Song
25 Dec 2023
VITS: Variational Inference Thompson Sampling for contextual bandits
Pierre Clavier, Tom Huix, Alain Durmus
19 Jul 2023
Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards
Hao Qin, Kwang-Sung Jun, Chicheng Zhang
28 Apr 2023
Batched Thompson Sampling for Multi-Armed Bandits
Nikolai Karpov, Qin Zhang
15 Aug 2021
Bandit Algorithms for Precision Medicine
Yangyi Lu, Ziping Xu, Ambuj Tewari
10 Aug 2021
On Bayesian index policies for sequential resource allocation
E. Kaufmann
06 Jan 2016