Online Clustering of Bandits

Claudio Gentile, Shuai Li, Giovanni Zappella
arXiv:1401.8257, 31 January 2014

Papers citing "Online Clustering of Bandits"

44 papers
CoCoB: Adaptive Collaborative Combinatorial Bandits for Online Recommendation
Cairong Yan, Jinyi Han, Jin Ju, Yanting Zhang, Zijian Wang, Xuan Shao (05 May 2025)

Explaining the Success of Nearest Neighbor Methods in Prediction [OOD]
George H. Chen, Devavrat Shah (21 Feb 2025)

Graph Feedback Bandits on Similar Arms: With and Without Graph Structures
Han Qi, Fei-Yu Guo, Li Zhu, Qiaosheng Zhang, X. Li (24 Jan 2025)

Graph Feedback Bandits with Similar Arms
Han Qi, Guo Fei, Li Zhu (18 May 2024)

Adaptive Interventions with User-Defined Goals for Health Behavior Change
Aishwarya Mandyam, Matthew Joerke, William Denton, Barbara E. Engelhardt, Emma Brunskill (16 Nov 2023)

Concentrated Differential Privacy for Bandits
Achraf Azize, D. Basu (01 Sep 2023)

Impression-Aware Recommender Systems [AI4TS]
F. B. P. Maurera, Maurizio Ferrari Dacrema, P. Castells, Paolo Cremonesi (15 Aug 2023)

Online Network Source Optimization with Graph-Kernel MAB
Laura Toni, P. Frossard (07 Jul 2023)

Adversarial Online Collaborative Filtering
Stephen Pasteris, Fabio Vitale, Mark Herbster, Claudio Gentile, Andre' Panisson (11 Feb 2023)

Optimal Algorithms for Latent Bandits with Cluster Structure
S. Pal, A. Suggala, Karthikeyan Shanmugam, Prateek Jain (17 Jan 2023)

Tractable Optimality in Episodic Latent MABs
Jeongyeol Kwon, Yonathan Efroni, C. Caramanis, Shie Mannor (05 Oct 2022)

Reward-Mixing MDPs with a Few Latent Contexts are Learnable
Jeongyeol Kwon, Yonathan Efroni, C. Caramanis, Shie Mannor (05 Oct 2022)

Federated Online Clustering of Bandits [FedML]
Xutong Liu, Haoruo Zhao, Tong Yu, Shuai Li, John C. S. Lui (31 Aug 2022)

Exploration in Linear Bandits with Rich Action Sets and its Implications for Inference
Debangshu Banerjee, Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan (23 Jul 2022)

A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits [FedML]
Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu (07 Jul 2022)

Private and Byzantine-Proof Cooperative Decision-Making
Abhimanyu Dubey, Alex Pentland (27 May 2022)

Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
Avishek Ghosh, Abishek Sankararaman (19 May 2022)

Non-stationary Bandits and Meta-Learning with a Small Set of Optimal Arms
Javad Azizi, T. Duong, Yasin Abbasi-Yadkori, András Gyorgy, Claire Vernade, Mohammad Ghavamzadeh (25 Feb 2022)

Meta-Learning for Simple Regret Minimization
Javad Azizi, B. Kveton, Mohammad Ghavamzadeh, S. Katariya (25 Feb 2022)

Neural Collaborative Filtering Bandits via Meta Learning [OffRL]
Yikun Ban, Yunzhe Qi, Tianxin Wei, Jingrui He (31 Jan 2022)

Margin-Independent Online Multiclass Learning via Convex Geometry
Guru Guruganesh, Allen Liu, Jon Schneider, Joshua R. Wang (15 Nov 2021)

Hierarchical Bayesian Bandits [FedML]
Joey Hong, B. Kveton, Manzil Zaheer, Mohammad Ghavamzadeh (12 Nov 2021)

Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models
Runzhe Wan, Linjuan Ge, Rui Song (13 Aug 2021)

Bandit Algorithms for Precision Medicine
Yangyi Lu, Ziping Xu, Ambuj Tewari (10 Aug 2021)

No Regrets for Learning the Prior in Bandits
Soumya Basu, B. Kveton, Manzil Zaheer, Csaba Szepesvári (13 Jul 2021)

When and Whom to Collaborate with in a Changing Environment: A Collaborative Dynamic Bandit Solution
Chuanhao Li, Qingyun Wu, Hongning Wang (14 Apr 2021)

RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems [BDL]
Martin Mladenov, Chih-Wei Hsu, Vihan Jain, Eugene Ie, Christopher Colby, Nicolas Mayoraz, H. Pham, Dustin Tran, Ivan Vendrov, Craig Boutilier (14 Mar 2021)

Meta-Thompson Sampling
B. Kveton, Mikhail Konobeev, Manzil Zaheer, Chih-Wei Hsu, Martin Mladenov, Craig Boutilier, Csaba Szepesvári (11 Feb 2021)

RL for Latent MDPs: Regret Guarantees and a Lower Bound
Jeongyeol Kwon, Yonathan Efroni, C. Caramanis, Shie Mannor (09 Feb 2021)

Meta-learning with Stochastic Linear Bandits [FedML]
Leonardo Cella, A. Lazaric, Massimiliano Pontil (18 May 2020)

Categorized Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet (04 May 2020)

Optimal Exploitation of Clustering and History Information in Multi-Armed Bandit
Djallel Bouneffouf, Srinivasan Parthasarathy, Horst Samulowitz, Martin Wistuba (31 May 2019)

Improved Algorithm on Online Clustering of Bandits
Shuai Li, Wei Chen, Shuai Li, K. Leung (25 Feb 2019)

Context-Based Dynamic Pricing with Online Clustering
Sentao Miao, Xi Chen, X. Chao, Jiaxi Liu, Yidong Zhang (17 Feb 2019)

Bilinear Bandits with Low-rank Structure
Kwang-Sung Jun, Rebecca Willett, S. Wright, Robert D. Nowak (08 Jan 2019)

Simple Regret Minimization for Contextual Bandits
A. Deshmukh, Srinagesh Sharma, J. Cutler, M. Moldwin, Clayton Scott (17 Oct 2018)

PG-TS: Improved Thompson Sampling for Logistic Contextual Bandits
Bianca Dumitrascu, Karen Feng, Barbara E. Engelhardt (18 May 2018)

Multi-objective Contextual Bandit Problem with Similarity Information
E. Turğay, Doruk Öner, Cem Tekin (11 Mar 2018)

Stochastic Low-Rank Bandits
B. Kveton, Csaba Szepesvári, Anup B. Rao, Zheng Wen, Yasin Abbasi-Yadkori, S. Muthukrishnan (13 Dec 2017)

Context-Aware Hierarchical Online Learning for Performance Maximization in Mobile Crowdsourcing
Sabrina Klos née Müller, Cem Tekin, Mihaela van der Schaar, A. Klein (10 May 2017)

Horde of Bandits using Gaussian Markov Random Fields
Sharan Vaswani, Mark W. Schmidt, L. Lakshmanan (07 Mar 2017)

On Context-Dependent Clustering of Bandits
Claudio Gentile, Shuai Li, Purushottam Kar, Alexandros Karatzoglou, Evans Etrue, Giovanni Zappella (06 Aug 2016)

Latent Contextual Bandits and their Application to Personalized Recommendations for New Users
Li Zhou, Emma Brunskill (22 Apr 2016)

Regret Guarantees for Item-Item Collaborative Filtering
Guy Bresler, Devavrat Shah, L. Voloch (20 Jul 2015)