Adversarial Attacks on Neural Network Policies
8 February 2017 · arXiv:1702.02284
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel
MLAU, AAML

Papers citing "Adversarial Attacks on Neural Network Policies"

34 / 434 papers shown
Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry
OOD, AAML · 30 Apr 2018

A Study on Overfitting in Deep Reinforcement Learning
Chiyuan Zhang, Oriol Vinyals, Rémi Munos, Samy Bengio
OffRL, OnRL · 18 Apr 2018

Invisible Mask: Practical Attacks on Face Recognition with Infrared
Zhe Zhou, Di Tang, Xiaofeng Wang, Weili Han, Xiangyu Liu, Kehuan Zhang
CVBM, AAML · 13 Mar 2018

Stochastic Activation Pruning for Robust Adversarial Defense
Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
AAML · 05 Mar 2018

Verifying Controllers Against Adversarial Examples with Bayesian Optimization
Shromona Ghosh, Felix Berkenkamp, G. Ranade, S. Qadeer, Ashish Kapoor
AAML · 23 Feb 2018

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau
FedML, AAML · 19 Feb 2018

DARTS: Deceiving Autonomous Cars with Toxic Signs
Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal
AAML · 18 Feb 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
AAML · 15 Feb 2018

Fooling OCR Systems with Adversarial Text Images
Congzheng Song, Vitaly Shmatikov
AAML · 15 Feb 2018

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints
Di Tang, Xiaofeng Wang, Kehuan Zhang
AAML · 13 Feb 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, D. Wagner
AAML · 05 Jan 2018

Deep Learning: A Critical Appraisal
G. Marcus
HAI, VLM · 02 Jan 2018

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar, Ajmal Mian
AAML · 02 Jan 2018

Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger
Vahid Behzadan, Arslan Munir
AAML · 23 Dec 2017

Adversarial Examples: Attacks and Defenses for Deep Learning
Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
SILM, AAML · 19 Dec 2017

Robust Deep Reinforcement Learning with Adversarial Attacks
Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, Girish Chowdhary
OOD · 11 Dec 2017

AI Safety Gridworlds
Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg
27 Nov 2017

Hardening Quantum Machine Learning Against Adversaries
N. Wiebe, Ramnath Kumar
AAML · 17 Nov 2017

Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
Yen-Chen Lin, Ming-Yuan Liu, Min Sun, Jia-Bin Huang
AAML · 02 Oct 2017

Fooling Vision and Language Models Despite Localization and Attention Mechanism
Xiaojun Xu, Xinyun Chen, Chang-rui Liu, Anna Rohrbach, Trevor Darrell, Basel Alomair
AAML · 25 Sep 2017

How intelligent are convolutional neural networks?
Zhennan Yan, Xiangmin Zhou
18 Sep 2017

Can Deep Neural Networks Match the Related Objects?: A Survey on ImageNet-trained Classification Models
Han S. Lee, Heechul Jung, Alex A. Agarwal, Junmo Kim
12 Sep 2017

Deep Packet: A Novel Approach For Encrypted Traffic Classification Using Deep Learning
M. Lotfollahi, Ramin Shirali Hossein Zade, Mahdi Jafari Siavoshani, Mohammdsadegh Saberian
08 Sep 2017

Learning Universal Adversarial Perturbations with Generative Models
Jamie Hayes, G. Danezis
AAML · 17 Aug 2017

Robust Physical-World Attacks on Deep Learning Models
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Basel Alomair
AAML · 27 Jul 2017

Towards Crafting Text Adversarial Samples
Suranjana Samanta, S. Mehta
AAML · 10 Jul 2017

Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Basel Alomair
AAML · 15 Jun 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 09 Jun 2017

Delving into adversarial attacks on deep policies
Jernej Kos, Basel Alomair
AAML · 18 May 2017

Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau
AAML · 08 May 2017

A General Safety Framework for Learning-Based Control in Uncertain Robotic Systems
J. F. Fisac, Anayo K. Akametalu, Melanie Zeilinger, Shahab Kaynama, J. Gillula, Claire Tomlin
03 May 2017

The Space of Transferable Adversarial Examples
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML, SILM · 11 Apr 2017

Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yuan Liu, Min Sun
AAML · 08 Mar 2017

Deep Reinforcement Learning: An Overview
Yuxi Li
OffRL, VLM · 25 Jan 2017