arXiv: 1905.11397
Are sample means in multi-armed bandits positively or negatively biased?
Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo
27 May 2019
Papers citing "Are sample means in multi-armed bandits positively or negatively biased?" (10 papers shown)
Replicability is Asymptotically Free in Multi-armed Bandits. Junpei Komiyama, Shinji Ito, Yuichi Yoshida, Souta Koshino. 12 Feb 2024.
Entropy Regularization for Population Estimation. Ben Chugg, Peter Henderson, Jacob Goldin, Daniel E. Ho. 24 Aug 2022.
Algorithms for Adaptive Experiments that Trade-off Statistical Analysis with Reward: Combining Uniform Random Assignment and Reward Maximization. Tong Li, Jacob Nogas, Haochen Song, Harsh Kumar, A. Durand, Anna N. Rafferty, Nina Deliu, S. Villar, Joseph Jay Williams. 15 Dec 2021.
Safe Data Collection for Offline and Online Policy Learning. Ruihao Zhu, B. Kveton. 08 Nov 2021. (OffRL)
Metalearning Linear Bandits by Prior Update. Amit Peleg, Naama Pearl, Ron Meir. 12 Jul 2021.
Near-optimal inference in adaptive linear regression. K. Khamaru, Y. Deshpande, Tor Lattimore, Lester W. Mackey, Martin J. Wainwright. 05 Jul 2021.
Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits. Wenshuo Guo, Kumar Krishna Agrawal, Aditya Grover, Vidya Muthukumar, A. Pananjady. 28 Jun 2021.
Policy Learning with Adaptively Collected Data. Ruohan Zhan, Zhimei Ren, Susan Athey, Zhengyuan Zhou. 05 May 2021. (OffRL)
Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments. Joseph Jay Williams, Jacob Nogas, Nina Deliu, Hammad Shaikh, S. Villar, A. Durand, Anna N. Rafferty. 22 Mar 2021. (AAML)
Inference for Batched Bandits. Kelly W. Zhang, Lucas Janson, Susan Murphy. 08 Feb 2020.