Sequential Attacks on Agents for Long-Term Adversarial Goals
E. Tretschk, Seong Joon Oh, Mario Fritz
arXiv:1805.12487 · 31 May 2018 · OnRL
Papers citing "Sequential Attacks on Agents for Long-Term Adversarial Goals" (15 of 15 papers shown)
Robust Deep Reinforcement Learning against Adversarial Behavior Manipulation
Shojiro Yamabe, Kazuto Fukuchi, Jun Sakuma · AAML · 0 citations · 06 Jun 2024

Enhancing the Robustness of QMIX against State-adversarial Attacks
Weiran Guo, Guanjun Liu, Ziyuan Zhou, Ling Wang, Jiacun Wang · AAML · 7 citations · 03 Jul 2023

SoK: Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
Maxwell Standen, Junae Kim, Claudia Szabo · AAML · 5 citations · 11 Jan 2023

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli · AAML · 4 citations · 12 Dec 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu · AAML · 4 citations · 19 Oct 2022

A Transferable and Automatic Tuning of Deep Reinforcement Learning for Cost Effective Phishing Detection
Orel Lavie, A. Shabtai, Gilad Katz · AAML, OffRL · 1 citation · 19 Sep 2022

Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding, Zhepeng Cen, Bo-wen Li, Ding Zhao · 45 citations · 16 Sep 2022

Deep-Attack over the Deep Reinforcement Learning
Yang Li, Quanbiao Pan, Min Zhang · AAML · 13 citations · 02 May 2022

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu · 132 citations · 14 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović · AAML · 26 citations · 10 Feb 2021

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla · AAML, OffRL · 29 citations · 21 Nov 2020

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla · AAML, OffRL · 120 citations · 28 Mar 2020

Learning to Cope with Adversarial Attacks
Xian Yeow Lee, Aaron J. Havens, Girish Chowdhary, S. Sarkar · AAML · 5 citations · 28 Jun 2019

Body Shape Privacy in Images: Understanding Privacy and Preventing Automatic Shape Extraction
Hosnieh Sattar, Katharina Krombholz, Gerard Pons-Moll, Mario Fritz · 3DH · 3 citations · 27 May 2019

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 5,837 citations · 08 Jul 2016