ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

DeepFool: a simple and accurate method to fool deep neural networks
14 November 2015
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML
ArXiv (abs) · PDF · HTML

Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"

50 / 2,298 papers shown
ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases
    Pierre Stock, Moustapha Cissé | FaML | 94 · 46 · 0 | 30 Nov 2017
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
    Anurag Arnab, O. Mikšík, Philip Torr | AAML | 115 · 308 · 0 | 27 Nov 2017
Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation
    Y. Yoo, Seonguk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak | AAML | 50 · 4 · 0 | 27 Nov 2017
Geometric robustness of deep networks: analysis and improvement
    Can Kanbak, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard | OOD, AAML | 119 · 131 · 0 | 24 Nov 2017
Adversarial Phenomenon in the Eyes of Bayesian Deep Learning
    Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae | BDL, AAML | 64 · 39 · 0 | 22 Nov 2017
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
    Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, S. Jha | AAML | 94 · 21 · 0 | 21 Nov 2017
Adversarial Attacks Beyond the Image Space
    Fangyin Wei, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille | AAML | 126 · 150 · 0 | 20 Nov 2017
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models
    Kathrin Grosse, David Pfaff, M. Smith, Michael Backes | AAML | 82 · 9 · 0 | 17 Nov 2017
Enhanced Attacks on Defensively Distilled Deep Neural Networks
    Yujia Liu, Weiming Zhang, Shaohua Li, Nenghai Yu | AAML | 76 · 6 · 0 | 16 Nov 2017
Defense against Universal Adversarial Perturbations
    Naveed Akhtar, Jian Liu, Ajmal Mian | AAML | 103 · 208 · 0 | 16 Nov 2017
Mitigating Adversarial Effects Through Randomization
    Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille | AAML | 164 · 1,068 · 0 | 06 Nov 2017
HyperNetworks with statistical filtering for defending adversarial examples
    Zhun Sun, Mete Ozay, Takayuki Okatani | AAML | 54 · 16 · 0 | 06 Nov 2017
Attacking Binarized Neural Networks
    A. Galloway, Graham W. Taylor, M. Moussa | MQ, AAML | 81 · 106 · 0 | 01 Nov 2017
Countering Adversarial Images using Input Transformations
    Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten | AAML | 149 · 1,409 · 0 | 31 Oct 2017
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
    Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman | AAML | 141 · 791 · 0 | 30 Oct 2017
A Saak Transform Approach to Efficient, Scalable and Robust Handwritten Digits Recognition
    Yueru Chen, Zhuwei Xu, Shanshan Cai, Yujian Lang, C.-C. Jay Kuo | 66 · 35 · 0 | 29 Oct 2017
Certifying Some Distributional Robustness with Principled Adversarial Training
    Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John C. Duchi | OOD | 145 · 866 · 0 | 29 Oct 2017
Interpretation of Neural Networks is Fragile
    Amirata Ghorbani, Abubakar Abid, James Zou | FAtt, AAML | 153 · 874 · 0 | 29 Oct 2017
One pixel attack for fooling deep neural networks
    Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai | AAML | 220 · 2,331 · 0 | 24 Oct 2017
On Data-Driven Saak Transform
    C.-C. Jay Kuo, Yueru Chen | AI4TS | 87 · 94 · 0 | 11 Oct 2017
Standard detectors aren't (currently) fooled by physical adversarial stop signs
    Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth | AAML | 101 · 59 · 0 | 09 Oct 2017
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
    Yen-Chen Lin, Ming-Yuan Liu, Min Sun, Jia-Bin Huang | AAML | 96 · 48 · 0 | 02 Oct 2017
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks
    D. Gopinath, Guy Katz, C. Păsăreanu, Clark W. Barrett | AAML | 141 · 87 · 0 | 02 Oct 2017
Provably Minimally-Distorted Adversarial Examples
    Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill | AAML | 105 · 89 · 0 | 29 Sep 2017
Verifying Properties of Binarized Deep Neural Networks
    Nina Narodytska, S. Kasiviswanathan, L. Ryzhyk, Shmuel Sagiv, T. Walsh | AAML | 117 · 217 · 0 | 19 Sep 2017
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
    Xiaoyu Cao, Neil Zhenqiang Gong | AAML | 85 · 212 · 0 | 17 Sep 2017
A Learning and Masking Approach to Secure Learning
    Linh Nguyen, Sky Wang, Arunesh Sinha | AAML | 63 · 2 · 0 | 13 Sep 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
    Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | AAML | 80 · 641 · 0 | 13 Sep 2017
Art of singular vectors and universal adversarial perturbations
    Valentin Khrulkov, Ivan Oseledets | AAML | 78 · 132 · 0 | 11 Sep 2017
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
    Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer | AAML | 93 · 137 · 0 | 11 Sep 2017
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning
    B. Rouhani, Mohammad Samragh, Mojan Javaheripi, T. Javidi, F. Koushanfar | AAML | 53 · 15 · 0 | 08 Sep 2017
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
    Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli | AAML | 79 · 98 · 0 | 23 Aug 2017
CNN Fixations: An unraveling approach to visualize the discriminative image regions
    Konda Reddy Mopuri, Utsav Garg, R. Venkatesh Babu | AAML | 91 · 56 · 0 | 22 Aug 2017
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
    Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao | AAML | 143 · 129 · 0 | 18 Aug 2017
Learning Universal Adversarial Perturbations with Generative Models
    Jamie Hayes, G. Danezis | AAML | 84 · 54 · 0 | 17 Aug 2017
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
    Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh | AAML | 115 · 1,894 · 0 | 14 Aug 2017
Robust Physical-World Attacks on Deep Learning Models
    Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Basel Alomair | AAML | 143 · 595 · 0 | 27 Jul 2017
Synthesizing Robust Adversarial Examples
    Anish Athalye, Logan Engstrom, Ilya Sutskever, Kevin Kwok | AAML | 68 · 66 · 0 | 24 Jul 2017
Confidence estimation in Deep Neural networks via density modelling
    Akshayvarun Subramanya, Suraj Srinivas, R. Venkatesh Babu | 65 · 51 · 0 | 21 Jul 2017
Efficient Defenses Against Adversarial Attacks
    Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat | AAML | 74 · 297 · 0 | 21 Jul 2017
Fast Feature Fool: A data independent approach to universal adversarial perturbations
    Konda Reddy Mopuri, Utsav Garg, R. Venkatesh Babu | AAML | 126 · 205 · 0 | 18 Jul 2017
APE-GAN: Adversarial Perturbation Elimination with GAN
    Shiwei Shen, Guoqing Jin, Feng Dai, Yongdong Zhang | GAN | 122 · 221 · 0 | 18 Jul 2017
Houdini: Fooling Deep Structured Prediction Models
    Moustapha Cissé, Yossi Adi, Natalia Neverova, Joseph Keshet | AAML | 90 · 272 · 0 | 17 Jul 2017
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
    Jonas Rauber, Wieland Brendel, Matthias Bethge | AAML | 82 · 283 · 0 | 13 Jul 2017
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
    Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth | AAML | 104 · 282 · 0 | 12 Jul 2017
A Survey on Resilient Machine Learning
    Atul Kumar, S. Mehta | OOD, AAML | 83 · 16 · 0 | 11 Jul 2017
Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
    John Bradshaw, A. G. Matthews, Zoubin Ghahramani | BDL, AAML | 123 · 172 · 0 | 08 Jul 2017
UPSET and ANGRI: Breaking High Performance Image Classifiers
    Sayantan Sarkar, Ankan Bansal, U. Mahbub, Rama Chellappa | AAML | 83 · 108 · 0 | 04 Jul 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 341 · 12,169 · 0 | 19 Jun 2017
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
    Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Basel Alomair | AAML | 114 · 242 · 0 | 15 Jun 2017