Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically Triggered Arms
arXiv:1407.8339 · 31 July 2014
Wei Chen, Yajun Wang, Yang Yuan, Qinshi Wang

Papers citing "Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically Triggered Arms" (25 of 25 papers shown)

Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond
Xutong Liu, Siwei Wang, Jinhang Zuo, Han Zhong, Xuchuang Wang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, J. C. Lui, Wei Chen
03 Jun 2024 (85 / 1 / 0)

Mode Estimation with Partial Feedback
Charles Arnal, Vivien A. Cabannes, Vianney Perchet
20 Feb 2024 (45 / 0 / 0)

Cooperative Multi-Agent Graph Bandits: UCB Algorithm and Regret Analysis
Phevos Paschalidis, Runyu Zhang, Na Li
18 Jan 2024 (28 / 0 / 0)

Online Influence Maximization under Decreasing Cascade Model
Fang-yuan Kong, Jize Xie, Baoxiang Wang, Tao Yao, Shuai Li
19 May 2023 (13 / 5 / 0)

Multiplier Bootstrap-based Exploration
Runzhe Wan, Haoyu Wei, B. Kveton, R. Song
03 Feb 2023 (16 / 2 / 0)

Multiple-Play Stochastic Bandits with Shareable Finite-Capacity Arms
Xuchuang Wang, Hong Xie, John C. S. Lui
17 Jun 2022 (24 / 6 / 0)

Combinatorial Causal Bandits
Shi Feng, Wei Chen
04 Jun 2022 · CML (19 / 13 / 0)

Networked Restless Multi-Armed Bandits for Mobile Interventions
H. Ou, Christoph Siebenbrunner, J. Killian, M. Brooks, David Kempe, Yevgeniy Vorobeychik, Milind Tambe
28 Jan 2022 (37 / 7 / 0)

Hierarchical Bayesian Bandits
Joey Hong, B. Kveton, Manzil Zaheer, Mohammad Ghavamzadeh
12 Nov 2021 · FedML (47 / 37 / 0)

Online Learning of Independent Cascade Models with Node-level Feedback
Shuoguang Yang, Van-Anh Truong
06 Sep 2021 (21 / 2 / 0)

No Regrets for Learning the Prior in Bandits
Soumya Basu, B. Kveton, Manzil Zaheer, Csaba Szepesvári
13 Jul 2021 (41 / 33 / 0)

Policy Optimization as Online Learning with Mediator Feedback
Alberto Maria Metelli, Matteo Papini, P. D'Oro, Marcello Restelli
15 Dec 2020 · OffRL (27 / 10 / 0)

Fully Gap-Dependent Bounds for Multinomial Logit Bandit
Jiaqi Yang
19 Nov 2020 (11 / 2 / 0)

Restless-UCB, an Efficient and Low-complexity Algorithm for Online Restless Bandits
Siwei Wang, Longbo Huang, John C. S. Lui
05 Nov 2020 · OffRL (24 / 38 / 0)

Dual-Mandate Patrols: Multi-Armed Bandits for Green Security
Lily Xu, Elizabeth Bondi-Kelly, Fei Fang, Andrew Perrault, Kai Wang, Milind Tambe
14 Sep 2020 (21 / 44 / 0)

Carousel Personalization in Music Streaming Apps with Contextual Bandits
Walid Bendada, Guillaume Salha-Galvan, Théo Bontempelli
14 Sep 2020 (26 / 56 / 0)

Exploration by Optimisation in Partial Monitoring
Tor Lattimore, Csaba Szepesvári
12 Jul 2019 (23 / 38 / 0)

Adaptive Sensor Placement for Continuous Spaces
James A. Grant, A. Boukouvalas, Ryan-Rhys Griffiths, David S. Leslie, Sattar Vakili, Enrique Munoz de Cote
16 May 2019 (16 / 13 / 0)

Combinatorial Pure Exploration with Continuous and Separable Reward Functions and Its Applications (Extended Version)
Weiran Huang, Jungseul Ok, Liang-Sheng Li, Wei Chen
04 May 2018 (13 / 61 / 0)

Thompson Sampling for Combinatorial Semi-Bandits
Siwei Wang, Wei Chen
13 Mar 2018 (13 / 125 / 0)

Online Learning: A Comprehensive Survey
Guosheng Lin, Doyen Sahoo, Jing Lu, P. Zhao
08 Feb 2018 · OffRL (27 / 633 / 0)

Combinatorial Multi-Armed Bandits with Filtered Feedback
James A. Grant, David S. Leslie, K. Glazebrook, R. Szechtman
26 May 2017 (32 / 1 / 0)

Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications
Qinshi Wang, Wei Chen
05 Mar 2017 (27 / 85 / 0)

Influence Maximization with Bandits
Sharan Vaswani, L. Lakshmanan, Mark Schmidt
27 Feb 2015 (23 / 65 / 0)

Matroid Bandits: Fast Combinatorial Optimization with Learning
B. Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, Brian Eriksson
20 Mar 2014 (46 / 119 / 0)