Contextual Bandits with Similarity Information
Aleksandrs Slivkins
23 July 2009
arXiv:0907.3986
Papers citing "Contextual Bandits with Similarity Information" (50 of 162 papers shown)
Title | Authors | Tags | Date
Quantum Lipschitz Bandits | Bongsoo Yi, Yue Kang, Yao Li | | 03 Apr 2025
Learn to Bid as a Price-Maker Wind Power Producer | Shobhit Singhal, Marta Fochesato, Liviu Aolaritei, Florian Dorfler | | 20 Mar 2025
Sparse Nonparametric Contextual Bandits | Hamish Flynn, Julia Olkhovskaya, Paul Rognon-Vael | | 20 Mar 2025
Greedy Algorithm for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure | Aleksandrs Slivkins, Yunzong Xu, Shiliang Zuo | | 06 Mar 2025
A Tight Regret Analysis of Non-Parametric Repeated Contextual Brokerage | François Bachoc, Tommaso Cesari, Roberto Colomboni | | 03 Mar 2025
Functional multi-armed bandit and the best function identification problems | Yuriy Dorn, Aleksandr Katrutsa, Ilgam Latypov, Anastasiia Soboleva | | 01 Mar 2025
A Complete Characterization of Learnability for Stochastic Noisy Bandits | Steve Hanneke, Kun Wang | | 20 Jan 2025
On The Statistical Complexity of Offline Decision-Making | Thanh Nguyen-Tang, R. Arora | OffRL | 10 Jan 2025
Contextual Bandits for Unbounded Context Distributions | Puning Zhao, Xiaogang Xu, Zhe Liu, Huiwen Wu, Qin Zhang, Zong Ke, Tianhang Zheng | | 19 Aug 2024
Batched Stochastic Bandit for Nondegenerate Functions | Yu Liu, Yunlu Shu, Tianyu Wang | | 09 May 2024
Near-optimal Per-Action Regret Bounds for Sleeping Bandits | Quan Nguyen, Nishant A. Mehta | | 02 Mar 2024
Bandits with Abstention under Expert Advice | Stephen Pasteris, Alberto Rumi, Maximilian Thiessen, Shota Saito, Atsushi Miyauchi, Fabio Vitale, Mark Herbster | | 22 Feb 2024
Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence | Jiafei Lyu, Le Wan, Xiu Li, Zongqing Lu | CML, OffRL | 05 Feb 2024
A Hierarchical Nearest Neighbour Approach to Contextual Bandits | Stephen Pasteris, Chris Hicks, V. Mavroudis | | 14 Dec 2023
An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits | Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Max Springer | | 29 Oct 2023
Off-Policy Evaluation for Large Action Spaces via Policy Convolution | Noveen Sachdeva, Lequn Wang, Dawen Liang, Nathan Kallus, Julian McAuley | OffRL | 24 Oct 2023
α-Fair Contextual Bandits | Siddhant Chaudhary, Abhishek Sinha | FaML | 22 Oct 2023
Online Algorithms with Uncertainty-Quantified Predictions | Bo Sun, Jerry Huang, Nicolas H. Christianson, Mohammad Hajiesmaili, Adam Wierman, Raouf Boutaba | | 17 Oct 2023
Byzantine-Resilient Decentralized Multi-Armed Bandits | Jingxuan Zhu, Alec Koppel, Alvaro Velasquez, Ji Liu | | 11 Oct 2023
Doubly High-Dimensional Contextual Bandits: An Interpretable Model for Joint Assortment-Pricing | Junhui Cai, Ran Chen, Martin J. Wainwright, Linda H. Zhao | | 14 Sep 2023
Clustered Linear Contextual Bandits with Knapsacks | Yichuan Deng, M. Mamakos, Zhao Song | | 21 Aug 2023
Corruption-Robust Lipschitz Contextual Search | Shiliang Zuo | | 26 Jul 2023
Tracking Most Significant Shifts in Nonparametric Contextual Bandits | Joe Suk, Samory Kpotufe | | 11 Jul 2023
Online Network Source Optimization with Graph-Kernel MAB | Laura Toni, P. Frossard | | 07 Jul 2023
Kernel ε-Greedy for Contextual Bandits | Sakshi Arya, Bharath K. Sriperumbudur | | 29 Jun 2023
Nearest Neighbour with Bandit Feedback | Stephen Pasteris, Chris Hicks, V. Mavroudis | | 23 Jun 2023
Oracle-Efficient Pessimism: Offline Policy Optimization in Contextual Bandits | Lequn Wang, A. Krishnamurthy, Aleksandrs Slivkins | OffRL | 13 Jun 2023
Cooperative Thresholded Lasso for Sparse Linear Bandit | Haniyeh Barghi, Xiaotong Cheng, S. Maghsudi | | 30 May 2023
Robust Lipschitz Bandits to Adversarial Corruptions | Yue Kang, Cho-Jui Hsieh, T. C. Lee | AAML | 29 May 2023
From Random Search to Bandit Learning in Metric Measure Spaces | Chuying Han, Yasong Feng, Tianyu Wang | | 19 May 2023
Stochastic Contextual Bandits with Graph-based Contexts | Jittat Fakcharoenphol, Chayutpong Prompak | | 02 May 2023
Online Learning for Equilibrium Pricing in Markets under Incomplete Information | Devansh Jalota, Haoyuan Sun, Navid Azizan | | 21 Mar 2023
A Lipschitz Bandits Approach for Continuous Hyperparameter Optimization | Yasong Feng, Weijian Luo, Yimin Huang, Tianyu Wang | | 03 Feb 2023
Contextual Bandits and Optimistically Universal Learning | Moise Blanchard, Steve Hanneke, Patrick Jaillet | OffRL | 31 Dec 2022
Online Learning for Adaptive Probing and Scheduling in Dense WLANs | Tianyi Xu, Ding Zhang, Zizhan Zheng | | 27 Dec 2022
On the Sample Complexity of Representation Learning in Multi-task Bandits with Global and Local structure | Alessio Russo, Alexandre Proutiere | | 28 Nov 2022
Optimal Contextual Bandits with Knapsacks under Realizability via Regression Oracles | Yuxuan Han, Jialin Zeng, Yang Wang, Yangzhen Xiang, Jiheng Zhang | | 21 Oct 2022
Artificial Replay: A Meta-Algorithm for Harnessing Historical Data in Bandits | Siddhartha Banerjee, Sean R. Sinclair, Milind Tambe, Lily Xu, Chao Yu | AI4TS | 30 Sep 2022
Non-monotonic Resource Utilization in the Bandits with Knapsacks Problem | Raunak Kumar, Robert D. Kleinberg | | 24 Sep 2022
Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health | Yi Shen, J. Dunn, Michael M. Zavlanos | | 09 Sep 2022
Dynamic collaborative filtering Thompson Sampling for cross-domain advertisements recommendation | Shion Ishikawa, Young-joo Chung, Yuya Hirate | | 25 Aug 2022
Autonomous Drug Design with Multi-Armed Bandits | Hampus Gummesson Svensson, E. Bjerrum, C. Tyrchan, Ola Engkvist, M. Chehreghani | | 04 Jul 2022
Contextual Combinatorial Multi-output GP Bandits with Group Constraints | Sepehr Elahi, Baran Atalar, Sevda Öğüt, Cem Tekin | | 29 Nov 2021
Adaptive Discretization in Online Reinforcement Learning | Sean R. Sinclair, Siddhartha Banerjee, Chao Yu | OffRL | 29 Oct 2021
Analysis of Thompson Sampling for Partially Observable Contextual Multi-Armed Bandits | Yash J. Patel, Mohamad Kazem Shirani Faradonbeh | | 23 Oct 2021
Lipschitz Bandits with Batched Feedback | Yasong Feng, Zengfeng Huang, Tianyu Wang | | 19 Oct 2021
Contextual Combinatorial Bandits with Changing Action Sets via Gaussian Processes | Andi Nika, Sepehr Elahi, Cem Tekin | | 05 Oct 2021
Distributionally Robust Learning | Ruidi Chen, I. Paschalidis | OOD | 20 Aug 2021
Joint AP Probing and Scheduling: A Contextual Bandit Approach | Tianyi Xu, Ding Zhang, Parth H. Pathak, Zizhan Zheng | | 06 Aug 2021
An Adaptive State Aggregation Algorithm for Markov Decision Processes | Guanting Chen, Johann D. Gaebler, M. Peng, Chunlin Sun, Yinyu Ye | | 23 Jul 2021