Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning (arXiv 2208.13663)
29 August 2022
A. Rangi, Haifeng Xu, Long Tran-Thanh, M. Franceschetti
AAML, OffRL
Papers citing "Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning" (5 of 5 papers shown):
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Ethan Rathbun, Christopher Amato, Alina Oprea
OffRL, AAML
30 May 2024

Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path
Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu
14 Feb 2024

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
AAML, OffRL
27 Feb 2023

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla
AAML, OffRL
16 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović
AAML
10 Feb 2021