arXiv:1607.02533 (v4, latest)
Adversarial examples in the physical world
8 July 2016
Alexey Kurakin
Ian Goodfellow
Samy Bengio
Categories: SILM, AAML
Papers citing "Adversarial examples in the physical world" (50 of 2,769 shown):
Adversarial attacks hidden in plain sight
  Jan Philip Göpfert, André Artelt, H. Wersing, Barbara Hammer (AAML), 25 Feb 2019

Visualization, Discriminability and Applications of Interpretable Saak Features
  Abinaya Manimaran, T. Ramanathan, Suya You, C.-C. Jay Kuo (FAtt), 25 Feb 2019

Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems
  Meysam Sadeghi, Erik G. Larsson (AAML), 22 Feb 2019

Quantifying Perceptual Distortion of Adversarial Examples
  Matt Jordan, N. Manoj, Surbhi Goel, A. Dimakis, 21 Feb 2019

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
  Eric Wong, Frank R. Schmidt, J. Zico Kolter (AAML), 21 Feb 2019

Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers
  Diego Gragnaniello, Francesco Marra, Giovanni Poggi, L. Verdoliva (AAML), 20 Feb 2019

Reconstruction of 3-D Atomic Distortions from Electron Microscopy with Deep Learning
  N. Laanait, Qian He, A. Borisevich, 19 Feb 2019

On Evaluating Adversarial Robustness
  Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin (ELM, AAML), 18 Feb 2019

AuxBlocks: Defense Adversarial Example via Auxiliary Blocks
  Yueyao Yu, Pengfei Yu, Wenye Li (AAML), 18 Feb 2019

Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
  S. Kokalj-Filipovic, Rob Miller, Nicholas Chang, Chi Leung Lau (AAML), 16 Feb 2019

Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness
  S. Kokalj-Filipovic, Rob Miller (AAML), 16 Feb 2019

DeepFault: Fault Localization for Deep Neural Networks
  Hasan Ferit Eniser, Simos Gerasimou, A. Sen (AAML), 15 Feb 2019

The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
  Kevin Roth, Yannic Kilcher, Thomas Hofmann (AAML), 13 Feb 2019

Towards a Robust Deep Neural Network in Texts: A Survey
  Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, Aoshuang Ye (AAML), 12 Feb 2019

Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving
  Michal Uřičář, P. Krízek, David Hurych, Ibrahim Sobh, S. Yogamani, Patrick Denny (GAN), 09 Feb 2019

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
  Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Y. Tsai (BDL, AAML, CML), 09 Feb 2019

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks
  Priyadarshini Panda, I. Chakraborty, Kaushik Roy (AAML), 08 Feb 2019

Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis
  Danilo Vasconcellos Vargas, Jiawei Su (FAtt, AAML), 08 Feb 2019

Robustness Of Saak Transform Against Adversarial Attacks
  T. Ramanathan, Abinaya Manimaran, Suya You, C.-C. Jay Kuo, 07 Feb 2019

Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
  Derui Wang, Chaoran Li, S. Wen, Qing-Long Han, Surya Nepal, Xiangyu Zhang, Yang Xiang (AAML), 06 Feb 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
  Juyeon Heo, Sunghwan Joo, Taesup Moon (AAML, FAtt), 06 Feb 2019

Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
  Nicholas Frosst, Nicolas Papernot, Geoffrey E. Hinton, 05 Feb 2019

Theoretical evidence for adversarial robustness through randomization
  Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif (AAML), 04 Feb 2019

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
  Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai (AAML), 04 Feb 2019

Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
  S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann (OOD), 01 Feb 2019

Robustness Certificates Against Adversarial Examples for ReLU Networks
  Sahil Singla, Soheil Feizi (AAML), 01 Feb 2019

Adaptive Gradient for Adversarial Perturbations Generation
  Yatie Xiao, Chi-Man Pun (ODL), 01 Feb 2019

A New Family of Neural Networks Provably Resistant to Adversarial Attacks
  Rakshit Agrawal, Luca de Alfaro, D. Helmbold (AAML, OOD), 01 Feb 2019

Augmenting Model Robustness with Transformation-Invariant Attacks
  Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang, Yi Ren (AAML, OOD), 31 Jan 2019

Adversarial Metric Attack and Defense for Person Re-identification
  S. Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, Philip Torr (AAML), 30 Jan 2019

RED-Attack: Resource Efficient Decision based Attack for Machine Learning
  Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai (AAML), 29 Jan 2019

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
  Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai (GAN, AAML), 28 Jan 2019

Spectrum Data Poisoning with Adversarial Deep Learning
  Yi Shi, T. Erpek, Y. Sagduyu, Jason H. Li (AAML), 26 Jan 2019

Weighted-Sampling Audio Adversarial Example Attack
  Xiaolei Liu, Xiaosong Zhang, Kun Wan, Qingxin Zhu, Yufei Ding (DiffM, AAML), 26 Jan 2019

A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
  Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu (AAML), 26 Jan 2019

Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data
  Yi Shi, Y. Sagduyu, Kemal Davaslioglu, Jason H. Li (AAML), 25 Jan 2019

Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu (AAML), 25 Jan 2019

Sitatapatra: Blocking the Transfer of Adversarial Samples
  Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert D. Mullins, Ross J. Anderson, Chengzhong Xu (AAML, GAN), 23 Jan 2019

SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems
  Tianyu Du, S. Ji, Jinfeng Li, Qinchen Gu, Ting Wang, R. Beyah (AAML), 23 Jan 2019

Programmable Neural Network Trojan for Pre-Trained Feature Extractor
  Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang (AAML), 23 Jan 2019

Generating Adversarial Perturbation with Root Mean Square Gradient
  Yatie Xiao, Chi-Man Pun, Jizhe Zhou (GAN), 13 Jan 2019

ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
  Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang (AAML), 12 Jan 2019

Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification
  L. G. Hafemann, R. Sabourin, Luiz Eduardo Soares de Oliveira (AAML), 10 Jan 2019

Image Transformation can make Neural Networks more robust against Adversarial Examples
  D. D. Thang, Toshihiro Matsui (AAML), 10 Jan 2019

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers
  Daniel Liu, Ronald Yu, Hao Su (3DPC), 10 Jan 2019

Contamination Attacks and Mitigation in Multi-Party Machine Learning
  Jamie Hayes, O. Ohrimenko (AAML, FedML), 08 Jan 2019

Interpretable BoW Networks for Adversarial Example Detection
  Krishna Kanth Nakka, Mathieu Salzmann (GAN, AAML), 08 Jan 2019

Image Super-Resolution as a Defense Against Adversarial Attacks
  Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao (AAML, SupR), 07 Jan 2019

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study
  Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu (AAML), 04 Jan 2019

Multi-Label Adversarial Perturbations
  Qingquan Song, Haifeng Jin, Xiao Huang, Helen Zhou (AAML), 02 Jan 2019