Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
Guanlin Liu, Lifeng Lai
arXiv:2110.04471 · 9 October 2021 · AAML

Papers citing "Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning" (10 of 10 papers shown)

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla
AAML, OffRL · 16 Feb 2021

On the Adversarial Robustness of LASSO Based Feature Selection
Fuwei Li, Lifeng Lai, Shuguang Cui
AAML · 20 Oct 2020

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
Yanchao Sun, Da Huo, Furong Huang
AAML, OffRL, OnRL · 02 Sep 2020

Q-learning with Logarithmic Regret
Kunhe Yang, Lin F. Yang, S. Du
16 Jun 2020

Unlabeled Data Improves Adversarial Robustness
Y. Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi
31 May 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML · 08 Feb 2019

Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation
Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, John C. Duchi, Russ Tedrake
31 Oct 2018

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 04 Nov 2016

Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML · 26 Oct 2016

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 21 Dec 2013