Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
Chao Tao, Qin Zhang, Yuanshuo Zhou
arXiv:1904.03293, 5 April 2019
Tags: FedML
Papers citing "Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits" (16 papers)
Near Optimal Best Arm Identification for Clustered Bandits
Yash, Nikhil Karamchandani, Avishek Ghosh
15 May 2025

Optimal Streaming Algorithms for Multi-Armed Bandits
Tianyuan Jin, Keke Huang, Jing Tang, Xiaokui Xiao
23 Oct 2024

Batched Stochastic Bandit for Nondegenerate Functions
Yu Liu, Yunlu Shu, Tianyu Wang
09 May 2024

Optimal Batched Best Arm Identification
Tianyuan Jin, Yu Yang, Jing Tang, Xiaokui Xiao, Pan Xu
21 Oct 2023

Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
Nikolai Karpov, Qin Zhang
26 Jan 2023

Distributed Linear Bandits under Communication Constraints
Sudeep Salgia, Qing Zhao
04 Nov 2022

Federated Best Arm Identification with Heterogeneous Clients
Zhirui Chen, P. Karthik, Vincent Y. F. Tan, Yeow Meng Chee
14 Oct 2022
Tags: FedML

Collaborative Algorithms for Online Personalized Mean Estimation
Mahsa Asadi, A. Bellet, Odalric-Ambrym Maillard, Marc Tommasi
24 Aug 2022
Tags: FedML

Almost Cost-Free Communication in Federated Best Arm Identification
Kota Srinivas Reddy, P. Karthik, Vincent Y. F. Tan
19 Aug 2022
Tags: FedML

Parallel Best Arm Identification in Heterogeneous Environments
Nikolai Karpov, Qin Zhang
16 Jul 2022

Near-Optimal Collaborative Learning in Bandits
Clémence Réda, Sattar Vakili, E. Kaufmann
31 May 2022
Tags: FedML

Collaborative Pure Exploration in Kernel Bandit
Yihan Du, Wei Chen, Yuko Kuroki, Longbo Huang
29 Oct 2021

Online Learning for Cooperative Multi-Player Multi-Armed Bandits
William Chang, Mehdi Jafarnia-Jahromi, Rahul Jain
07 Sep 2021

Cooperative Stochastic Multi-agent Multi-armed Bandits Robust to Adversarial Corruptions
Junyan Liu, Shuai Li, Dapeng Li
08 Jun 2021

Linear Bandits with Limited Adaptivity and Learning Distributional Optimal Design
Yufei Ruan, Jiaqi Yang, Yuanshuo Zhou
04 Jul 2020
Tags: OffRL

Exploration with Limited Memory: Streaming Algorithms for Coin Tossing, Noisy Comparisons, and Multi-Armed Bandits
Sepehr Assadi, Chen Wang
09 Apr 2020