Stochastic bandits robust to adversarial corruptions
Thodoris Lykouris, Vahab Mirrokni, R. Leme · AAML · 25 March 2018

Papers citing "Stochastic bandits robust to adversarial corruptions"

50 / 51 papers shown
Does Feedback Help in Bandits with Arm Erasures?
Merve Karakas, Osama A. Hanna, Lin Yang, Christina Fragouli · 29 Apr 2025

Tracking Most Significant Shifts in Infinite-Armed Bandits
Joe Suk, Jung-hun Kim · 31 Jan 2025

On the Adversarial Robustness of Benjamini Hochberg
Louis L Chen, Roberto Szechtman, Matan Seri · AAML · 08 Jan 2025

Beyond IID: data-driven decision-making in heterogeneous environments
Omar Besbes, Will Ma, Omar Mouchtaki · 03 Jan 2025

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits
Mengmeng Li, Daniel Kuhn, Bahar Taşkesen · 30 Sep 2024

A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of Θ(T^{2/3}) and its Application to Best-of-Both-Worlds
Taira Tsuchiya, Shinji Ito · 30 May 2024

Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
Qiwei Di, Jiafan He, Quanquan Gu · 16 Apr 2024
Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits
Zhiwei Wang, Huazheng Wang, Hongning Wang · AAML · 21 Feb 2024

CRIMED: Lower and Upper Bounds on Regret for Bandits with Unbounded Stochastic Corruption
Shubhada Agrawal, Timothée Mathieu, D. Basu, Odalric-Ambrym Maillard · 28 Sep 2023

On the Robustness of Epoch-Greedy in Multi-Agent Contextual Bandit Mechanisms
Yinglun Xu, Bhuvesh Kumar, Jacob D. Abernethy · AAML · 15 Jul 2023

Adversarial Attacks on Online Learning to Rank with Stochastic Click Models
Zichen Wang, R. Balasubramanian, Hui Yuan, Chenyu Song, Mengdi Wang, Huazheng Wang · AAML · 30 May 2023

Robust Lipschitz Bandits to Adversarial Corruptions
Yue Kang, Cho-Jui Hsieh, T. C. Lee · AAML · 29 May 2023

A Blackbox Approach to Best of Both Worlds in Bandits and Beyond
Christoph Dann, Chen-Yu Wei, Julian Zimmert · 20 Feb 2023

On Private and Robust Bandits
Yulian Wu, Xingyu Zhou, Youming Tao, Di Wang · 06 Feb 2023
Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes
Chen Ye, Wei Xiong, Quanquan Gu, Tong Zhang · 12 Dec 2022

Learning in Stackelberg Games with Non-myopic Agents
Nika Haghtalab, Thodoris Lykouris, Sloan Nietert, Alexander Wei · 19 Aug 2022

Best of Both Worlds Model Selection
Aldo Pacchiano, Christoph Dann, Claudio Gentile · 29 Jun 2022

Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds
A. Mitra, Arman Adibi, George J. Pappas, Hamed Hassani · 06 Jun 2022

Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs
Shinji Ito, Taira Tsuchiya, Junya Honda · 02 Jun 2022

Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning
Yinglun Xu, Qi Zeng, Gagandeep Singh · AAML · 30 May 2022

Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions
Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu · 13 May 2022
Federated Multi-Armed Bandits Under Byzantine Attacks
Artun Saday, Ilker Demirel, Yiğit Yıldırım, Cem Tekin · AAML · 09 May 2022

Versatile Dueling Bandits: Best-of-both-World Analyses for Online Learning from Preferences
Aadirupa Saha, Pierre Gaillard · 14 Feb 2022

Efficient Action Poisoning Attacks on Linear Contextual Bandits
Guanlin Liu, Lifeng Lai · AAML · 10 Dec 2021

One More Step Towards Reality: Cooperative Bandits with Imperfect Communication
Udari Madhushani, Abhimanyu Dubey, Naomi Ehrich Leonard, Alex Pentland · 24 Nov 2021

Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination
Arpan Mukherjee, A. Tajer, Pin-Yu Chen, Payel Das · AAML, FedML · 14 Nov 2021

Linear Contextual Bandits with Adversarial Corruptions
Heyang Zhao, Dongruo Zhou, Quanquan Gu · AAML · 25 Oct 2021

When Are Linear Stochastic Bandits Attackable?
Huazheng Wang, Haifeng Xu, Hongning Wang · AAML · 18 Oct 2021

On Optimal Robustness to Adversarial Corruption in Online Decision Problems
Shinji Ito · 22 Sep 2021
Bandit Algorithms for Precision Medicine
Yangyi Lu, Ziping Xu, Ambuj Tewari · 10 Aug 2021

Bayesian decision-making under misspecified priors with applications to meta-learning
Max Simchowitz, Christopher Tosh, A. Krishnamurthy, Daniel J. Hsu, Thodoris Lykouris, Miroslav Dudík, Robert Schapire · 03 Jul 2021

Cooperative Stochastic Multi-agent Multi-armed Bandits Robust to Adversarial Corruptions
Junyan Liu, Shuai Li, Dapeng Li · 08 Jun 2021

The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition
Tiancheng Jin, Longbo Huang, Haipeng Luo · 08 Jun 2021

Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks
Qin Ding, Cho-Jui Hsieh, James Sharpnack · AAML · 05 Jun 2021

Improved Analysis of the Tsallis-INF Algorithm in Stochastically Constrained Adversarial Bandits and Stochastic Bandits with Adversarial Corruptions
Saeed Masoudian, Yevgeny Seldin · 23 Mar 2021

Multiplicative Reweighting for Robust Neural Network Optimization
Noga Bar, Tomer Koren, Raja Giryes · OOD, NoLa · 24 Feb 2021
Improved Corruption Robust Algorithms for Episodic Reinforcement Learning
Yifang Chen, S. Du, Kevin G. Jamieson · 13 Feb 2021

Robust Policy Gradient against Strong Data Corruption
Xuezhou Zhang, Yiding Chen, Xiaojin Zhu, Wen Sun · AAML · 11 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović · AAML · 10 Feb 2021

The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems
S. Balseiro, Haihao Lu, Vahab Mirrokni · 18 Nov 2020

Robust Multi-Agent Multi-Armed Bandits
Daniel Vial, Sanjay Shakkottai, R. Srikant · 07 Jul 2020

Bandits with adversarial scaling
Thodoris Lykouris, Vahab Mirrokni, R. Leme · 04 Mar 2020

Corruption-Tolerant Gaussian Process Bandit Optimization
Ilija Bogunovic, Andreas Krause, Jonathan Scarlett · 04 Mar 2020
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack
Ziwei Guan, Kaiyi Ji, Donald J. Bucci, Timothy Y. Hu, J. Palombo, Michael J. Liston, Yingbin Liang · AAML · 17 Feb 2020

Nearly Optimal Algorithms for Piecewise-Stationary Cascading Bandits
Lingda Wang, Huozhi Zhou, Bingcong Li, Lav Varshney, Zhizhen Zhao · 12 Sep 2019

The Adversarial Robustness of Sampling
Omri Ben-Eliezer, E. Yogev · TTA, AAML · 26 Jun 2019

Data Poisoning Attacks on Stochastic Bandits
Fang Liu, Ness B. Shroff · AAML · 16 May 2019

Better Algorithms for Stochastic Bandits with Adversarial Corruptions
Anupam Gupta, Tomer Koren, Kunal Talwar · AAML · 22 Feb 2019

Bandits with Temporal Stochastic Constraints
Priyank Agrawal, Theja Tulabandhula · 22 Nov 2018

Unifying the stochastic and the adversarial Bandits with Knapsack
A. Rangi, M. Franceschetti, Long Tran-Thanh · 23 Oct 2018