DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
14 November 2015 · arXiv:1511.04599 · AAML
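As context for the citation list below: DeepFool linearizes the classifier around the current input, steps across the nearest linearized decision boundary, and repeats until the predicted label flips, yielding an approximately minimal perturbation. The snippet below is a minimal PyTorch sketch of that iteration, not the authors' reference implementation; `model`, `image`, `num_classes`, `overshoot`, and `max_iter` are illustrative placeholders.

```python
# Minimal sketch of the DeepFool iteration, assuming a PyTorch classifier
# `model` that maps an image tensor to raw logits. Illustrative only.
import torch

def deepfool(model, image, num_classes=10, overshoot=0.02, max_iter=50):
    """Return a perturbation r such that model(image + r) changes label."""
    model.eval()
    x = image.clone().detach()
    orig_label = model(x.unsqueeze(0)).argmax(dim=1).item()

    r_total = torch.zeros_like(x)
    for _ in range(max_iter):
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))[0]
        if logits.argmax().item() != orig_label:
            break  # label already flipped

        grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]

        best_ratio, best_w, best_f = None, None, None
        for k in range(num_classes):  # in practice, loop over the top-k classes only
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            w_k = grad_k - grad_orig                       # normal of the linearized boundary
            f_k = (logits[k] - logits[orig_label]).item()  # signed logit gap
            ratio = abs(f_k) / (w_k.norm() + 1e-8)         # distance to that boundary
            if best_ratio is None or ratio < best_ratio:
                best_ratio, best_w, best_f = ratio, w_k, f_k

        # step to the nearest linearized decision boundary
        r_i = (abs(best_f) / (best_w.norm() ** 2 + 1e-8)) * best_w
        r_total = (r_total + r_i).detach()

    return (1 + overshoot) * r_total
```

The small overshoot factor (1 + η) mirrors the paper's trick of pushing slightly past the estimated boundary so that the label actually changes rather than landing exactly on it.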
Papers citing "DeepFool: a simple and accurate method to fool deep neural networks" (50 of 2,298 papers shown)
Certified Adversarial Robustness via Randomized Smoothing · Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter · AAML · 219 / 2,057 / 0 · 08 Feb 2019
Robustness Of Saak Transform Against Adversarial Attacks · T. Ramanathan, Abinaya Manimaran, Suya You, C.-C. Jay Kuo · 76 / 5 / 0 · 07 Feb 2019
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples · Derui Wang, Chaoran Li, S. Wen, Qing-Long Han, Surya Nepal, Xiangyu Zhang, Yang Xiang · AAML · 75 / 40 / 0 · 06 Feb 2019
Theoretical evidence for adversarial robustness through randomization · Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif · AAML · 110 / 83 / 0 · 04 Feb 2019
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks · S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann · OOD · 49 / 19 / 0 · 01 Feb 2019
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation · Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi · FAtt · 71 / 59 / 0 · 01 Feb 2019
Robustness Certificates Against Adversarial Examples for ReLU Networks · Sahil Singla, Soheil Feizi · AAML · 68 / 21 / 0 · 01 Feb 2019
Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Robustness · Li Xiao, Yijie Peng, J. Hong, Zewu Ke, Shuhuai Yang · 8 / 0 / 0 · 31 Jan 2019
Augmenting Model Robustness with Transformation-Invariant Attacks · Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang, Yi Ren · AAML, OOD · 31 / 3 / 0 · 31 Jan 2019
Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval · Zhuoran Liu, Zhengyu Zhao, Martha Larson · AAML · 79 / 43 / 0 · 29 Jan 2019
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm · Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu · AAML · 53 / 16 / 0 · 26 Jan 2019
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data · Yi Shi, Y. Sagduyu, Kemal Davaslioglu, Jason H. Li · AAML · 58 / 29 / 0 · 25 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy · Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan · 251 / 2,566 / 0 · 24 Jan 2019
Sitatapatra: Blocking the Transfer of Adversarial Samples · Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert D. Mullins, Ross J. Anderson, Chengzhong Xu · AAML, GAN · 64 / 14 / 0 · 23 Jan 2019
Sensitivity Analysis of Deep Neural Networks · Hai Shu, Hongtu Zhu · AAML · 46 / 53 / 0 · 22 Jan 2019
Universal Rules for Fooling Deep Neural Networks based Text Classification · Di Li, Danilo Vasconcellos Vargas, Kouichi Sakurai · AAML · 46 / 11 / 0 · 22 Jan 2019
Perception-in-the-Loop Adversarial Examples · Mahmoud Salamati, Sadegh Soudjani, R. Majumdar · AAML · 20 / 2 / 0 · 21 Jan 2019
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey · W. Zhang, Quan Z. Sheng, A. Alhazmi, Chenliang Li · AAML · 125 / 57 / 0 · 21 Jan 2019
Generating Adversarial Perturbation with Root Mean Square Gradient · Yatie Xiao, Chi-Man Pun, Jizhe Zhou · GAN · 33 / 1 / 0 · 13 Jan 2019
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System · Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang · AAML · 75 / 28 / 0 · 12 Jan 2019
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers · Daniel Liu, Ronald Yu, Hao Su · 3DPC · 97 / 170 / 0 · 10 Jan 2019
Thinking Outside the Pool: Active Training Image Creation for Relative Attributes · Aron Yu, Kristen Grauman · 51 / 23 / 0 · 08 Jan 2019
Contamination Attacks and Mitigation in Multi-Party Machine Learning · Jamie Hayes, O. Ohrimenko · AAML, FedML · 114 / 75 / 0 · 08 Jan 2019
Interpretable BoW Networks for Adversarial Example Detection · Krishna Kanth Nakka, Mathieu Salzmann · GAN, AAML · 33 / 0 / 0 · 08 Jan 2019
Ten ways to fool the masses with machine learning · F. Minhas, Amina Asif, Asa Ben-Hur · FedML, HAI · 68 / 5 / 0 · 07 Jan 2019
Image Super-Resolution as a Defense Against Adversarial Attacks · Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao · AAML, SupR · 100 / 176 / 0 · 07 Jan 2019
Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study · Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu · AAML · 93 / 31 / 0 · 04 Jan 2019
Multi-Label Adversarial Perturbations · Qingquan Song, Haifeng Jin, Xiao Huang, Helen Zhou · AAML · 63 / 37 / 0 · 02 Jan 2019
A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks · Long Zhang, Xuechao Sun, Yong Li, Zhenyu Zhang · AAML · 53 / 22 / 0 · 01 Jan 2019
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems · Husheng Zhou, Wei Li, Yuankun Zhu, Yuqun Zhang, Bei Yu, Lingming Zhang, Cong Liu · AAML · 85 / 179 / 0 · 27 Dec 2018
End-to-End Latent Fingerprint Search · Kai Cao, Dinh-Luan Nguyen, Cori Tymoszek, Anil K. Jain · 57 / 24 / 0 · 26 Dec 2018
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples · Qiang Zeng, Jianhai Su, Chenglong Fu, Golam Kayas, Lannan Luo · AAML · 55 / 46 / 0 · 26 Dec 2018
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome · Li Chen, Qi Li, Jiawei Zhu, Jian Peng, Haifeng Li · AAML · 57 / 3 / 0 · 25 Dec 2018
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning · Mehdi Jafarnia-Jahromi, Tasmin Chowdhury, Hsin-Tai Wu, S. Mukherjee · AAML · 47 / 4 / 0 · 25 Dec 2018
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense · Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, Nenghai Yu · 3DPC · 69 / 8 / 0 · 25 Dec 2018
Towards resilient machine learning for ransomware detection · Li-Wei Chen, Chih-Yuan Yang, Anindya Paul, R. Sahita · AAML · 36 / 22 / 0 · 21 Dec 2018
A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability · Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi · AAML · 132 / 51 / 0 · 18 Dec 2018
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings · François Menet, Paul Berthier, José M. Fernandez, M. Gagnon · AAML · 27 / 10 / 0 · 17 Dec 2018
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering · Kevin Eykholt, A. Prakash · AAML · 60 / 4 / 0 · 17 Dec 2018
Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks · Xiang Li, Shihao Ji · AAML · 75 / 26 / 0 · 17 Dec 2018
Trust Region Based Adversarial Attack on Neural Networks · Z. Yao, A. Gholami, Peng Xu, Kurt Keutzer, Michael W. Mahoney · AAML · 64 / 54 / 0 · 16 Dec 2018
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples · E. Balda, Arash Behboodi, R. Mathar · AAML · 30 / 5 / 0 · 15 Dec 2018
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing · Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang · AAML · 78 / 191 / 0 · 14 Dec 2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem · Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf · OODD · 187 / 559 / 0 · 13 Dec 2018
TextBugger: Generating Adversarial Text Against Real-world Applications · Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang · SILM, AAML · 222 / 750 / 0 · 13 Dec 2018
Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform · Mitali Bafna, Jack Murtagh, Nikhil Vyas · AAML · 69 / 48 / 0 · 12 Dec 2018
Adversarial Framing for Image and Video Classification · Konrad Zolna, Michal Zajac, Negar Rostamzadeh, Pedro H. O. Pinheiro · AAML · 106 / 61 / 0 · 11 Dec 2018
On the Security of Randomized Defenses Against Adversarial Samples · K. Sharad, G. Marson, H. Truong, Ghassan O. Karame · AAML · 47 / 1 / 0 · 11 Dec 2018
Defending Against Universal Perturbations With Shared Adversarial Training · Chaithanya Kumar Mummadi, Thomas Brox, J. H. Metzen · AAML · 84 / 60 / 0 · 10 Dec 2018
Detecting Adversarial Examples in Convolutional Neural Networks · Stefanos Pertigkiozoglou, Petros Maragos · GAN, AAML · 75 / 16 / 0 · 08 Dec 2018