ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits
arXiv:1809.08151, 21 September 2018
Etienne Boursier, Vianney Perchet

Papers citing "SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits"

21 / 21 papers shown
  1. Learning to Control Unknown Strongly Monotone Games. Siddharth Chandak, Ilai Bistritz, Nicholas Bambos. 30 Jun 2024.
  2. Learning to Mitigate Externalities: the Coase Theorem with Hindsight Rationality. Antoine Scheid, Aymeric Capitaine, Etienne Boursier, Eric Moulines, Michael I. Jordan, Alain Durmus. 28 Jun 2024.
  3. Cooperative Multi-Agent Graph Bandits: UCB Algorithm and Regret Analysis. Phevos Paschalidis, Runyu Zhang, Na Li. 18 Jan 2024.
  4. Harnessing the Power of Federated Learning in Federated Contextual Bandits. Chengshuai Shi, Ruida Zhou, Kun Yang, Cong Shen. 26 Dec 2023. (FedML)
  5. Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits. Ronshee Chawla, Daniel Vial, Sanjay Shakkottai, R. Srikant. 30 May 2023.
  6. Decentralized Stochastic Multi-Player Multi-Armed Walking Bandits. Guojun Xiong, Jiaqiang Li. 12 Dec 2022.
  7. A survey on multi-player bandits. Etienne Boursier, Vianney Perchet. 29 Nov 2022.
  8. Federated Online Clustering of Bandits. Xutong Liu, Haoruo Zhao, Tong Yu, Shuai Li, John C. S. Lui. 31 Aug 2022. (FedML)
  9. Collaborative Algorithms for Online Personalized Mean Estimation. Mahsa Asadi, A. Bellet, Odalric-Ambrym Maillard, Marc Tommasi. 24 Aug 2022. (FedML)
  10. Distributed Bandits with Heterogeneous Agents. Lin Yang, Y. Chen, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley. 23 Jan 2022.
  11. An Instance-Dependent Analysis for the Cooperative Multi-Player Multi-Armed Bandit. Aldo Pacchiano, Peter L. Bartlett, Michael I. Jordan. 08 Nov 2021.
  12. Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization. Chengshuai Shi, Wei Xiong, Cong Shen, Jing Yang. 27 Oct 2021.
  13. Multi-armed Bandit Algorithms on System-on-Chip: Go Frequentist or Bayesian? S. Santosh, S. Darak. 05 Jun 2021.
  14. Federated Multi-Armed Bandits. Chengshuai Shi, Cong Shen. 28 Jan 2021. (FedML)
  15. On No-Sensing Adversarial Multi-player Multi-armed Bandits with Collision Communications. Chengshuai Shi, Cong Shen. 02 Nov 2020. (AAML)
  16. Multi-Agent Low-Dimensional Linear Bandits. Ronshee Chawla, Abishek Sankararaman, Sanjay Shakkottai. 02 Jul 2020.
  17. Decentralized Learning for Channel Allocation in IoT Networks over Unlicensed Bandwidth as a Contextual Multi-player Multi-armed Bandit Game. Wenbo Wang, Amir Leshem, Dusit Niyato, Zhu Han. 30 Mar 2020.
  18. Decentralized Multi-player Multi-armed Bandits with No Collision Information. Chengshuai Shi, Wei Xiong, Cong Shen, Jing Yang. 29 Feb 2020.
  19. Coordination without communication: optimal regret in two players multi-armed bandits. Sébastien Bubeck, Thomas Budzinski. 14 Feb 2020.
  20. Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without. Sébastien Bubeck, Yuanzhi Li, Yuval Peres, Mark Sellke. 28 Apr 2019.
  21. Multiplayer Multi-armed Bandits for Optimal Assignment in Heterogeneous Networks. Harshvardhan Tibrewal, Sravan Patchala, M. Hanawal, S. Darak. 12 Jan 2019.