DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
14 November 2015 · arXiv:1511.04599 · AAML
Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"
48 of 2,298 citing papers shown
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
  Yizhen Wang, S. Jha, Kamalika Chaudhuri · AAML · 13 Jun 2017 · 234 / 155 / 0
Towards Robust Detection of Adversarial Examples
  Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu · AAML · 02 Jun 2017 · 87 / 18 / 0
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
  Weilin Xu, David Evans, Yanjun Qi · AAML · 30 May 2017 · 68 / 42 / 0
Classification regions of deep neural networks
  Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard, Stefano Soatto · 26 May 2017 · 86 / 51 / 0
MagNet: a Two-Pronged Defense against Adversarial Examples
  Dongyu Meng, Hao Chen · AAML · 25 May 2017 · 56 / 1,210 / 0
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
  Matthias Hein, Maksym Andriushchenko · AAML · 23 May 2017 · 131 / 512 / 0
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction
  Bin Liang, Hongcheng Li, Miaoqiang Su, Xirong Li, Wenchang Shi, Xiaofeng Wang · AAML · 23 May 2017 · 131 / 219 / 0
Regularizing deep networks using efficient layerwise adversarial training
  S. Sankaranarayanan, Arpit Jain, Rama Chellappa, Ser Nam Lim · AAML · 22 May 2017 · 90 / 97 / 0
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
  Nicholas Carlini, D. Wagner · AAML · 20 May 2017 · 140 / 1,869 / 0
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
  Sailik Sengupta, Tathagata Chakraborti, S. Kambhampati · AAML · 19 May 2017 · 137 / 63 / 0
Ensemble Adversarial Training: Attacks and Defenses
  Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel · AAML · 19 May 2017 · 217 / 2,738 / 0
Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression
  Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau · AAML · 08 May 2017 · 89 / 307 / 0
DeepCorrect: Correcting DNN models against Image Distortions
  Tejas S. Borkar, Lina Karam · 05 May 2017 · 129 / 93 / 0
Parseval Networks: Improving Robustness to Adversarial Examples
  Moustapha Cissé, Piotr Bojanowski, Edouard Grave, Yann N. Dauphin, Nicolas Usunier · AAML · 28 Apr 2017 · 156 / 808 / 0
Deep Text Classification Can be Fooled
  Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi · AAML · 26 Apr 2017 · 85 / 427 / 0
Universal Adversarial Perturbations Against Semantic Image Segmentation
  J. H. Metzen, Mummadi Chaithanya Kumar, Thomas Brox, Volker Fischer · AAML · 19 Apr 2017 · 177 / 288 / 0
The Space of Transferable Adversarial Examples
  Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel · AAML, SILM · 11 Apr 2017 · 127 / 558 / 0
Enhancing Robustness of Machine Learning Systems via Data Transformations
  A. Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal · AAML · 09 Apr 2017 · 114 / 231 / 0
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks
  Yi Han, Benjamin I. P. Rubinstein · SILM, AAML · 06 Apr 2017 · 66 / 6 / 0
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
  Weilin Xu, David Evans, Yanjun Qi · AAML · 04 Apr 2017 · 104 / 1,283 / 0
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
  Jiajun Lu, Theerasit Issaranon, David A. Forsyth · GAN · 01 Apr 2017 · 120 / 381 / 0
Adversarial Image Perturbation for Privacy Protection -- A Game Theory Perspective
  Seong Joon Oh, Mario Fritz, Bernt Schiele · CVBM, AAML · 28 Mar 2017 · 431 / 162 / 0
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
  S. Baluja, Ian S. Fischer · GAN · 28 Mar 2017 · 87 / 286 / 0
Adversarial Examples for Semantic Segmentation and Object Detection
  Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, Alan Yuille · GAN, AAML · 24 Mar 2017 · 113 / 935 / 0
Quality Resilient Deep Neural Networks
  Samuel F. Dodge, Lina Karam · OOD · 23 Mar 2017 · 70 / 46 / 0
Understanding Black-box Predictions via Influence Functions
  Pang Wei Koh, Percy Liang · TDI · 14 Mar 2017 · 234 / 2,916 / 0
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems
  Hossein Hosseini, Yize Chen, Sreeram Kannan, Baosen Zhang, Radha Poovendran · AAML · 13 Mar 2017 · 90 / 107 / 0
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
  Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yuan Liu, Min Sun · AAML · 08 Mar 2017 · 141 / 418 / 0
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components
  T. Dreossi, Alexandre Donzé, Sanjit A. Seshia · AAML · 02 Mar 2017 · 131 / 231 / 0
Robustness to Adversarial Examples through an Ensemble of Specialists
  Mahdieh Abbasi, Christian Gagné · AAML · 22 Feb 2017 · 117 / 109 / 0
Adversarial examples for generative models
  Jernej Kos, Ian S. Fischer, Basel Alomair · GAN · 22 Feb 2017 · 95 / 274 / 0
On Detecting Adversarial Perturbations
  J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff · AAML · 14 Feb 2017 · 125 / 952 / 0
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
  Xin Li, Fuxin Li · GAN, AAML · 22 Dec 2016 · 145 / 366 / 0
Simple Black-Box Adversarial Perturbations for Deep Networks
  Nina Narodytska, S. Kasiviswanathan · AAML · 19 Dec 2016 · 84 / 239 / 0
Deep Variational Information Bottleneck
  Alexander A. Alemi, Ian S. Fischer, Joshua V. Dillon, Kevin Patrick Murphy · 01 Dec 2016 · 203 / 1,735 / 0
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
  Beilun Wang, Ji Gao, Yanjun Qi · AAML · 01 Dec 2016 · 54 / 30 / 0
Towards the Science of Security and Privacy in Machine Learning
  Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael P. Wellman · AAML · 11 Nov 2016 · 99 / 474 / 0
Universal adversarial perturbations
  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard · AAML · 26 Oct 2016 · 267 / 2,534 / 0
Safety Verification of Deep Neural Networks
  Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu · AAML · 21 Oct 2016 · 290 / 945 / 0
Are Accuracy and Robustness Correlated?
  Andras Rozsa, Manuel Günther, Terrance E. Boult · AAML · 14 Oct 2016 · 78 / 61 / 0
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
  Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, ..., David Berthelot, P. Hendricks, Jonas Rauber, Rujun Long, Patrick McDaniel · AAML · 03 Oct 2016 · 98 / 516 / 0
Robustness of classifiers: from adversarial to random noise
  Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 31 Aug 2016 · 110 / 376 / 0
Towards Evaluating the Robustness of Neural Networks
  Nicholas Carlini, D. Wagner · OOD, AAML · 16 Aug 2016 · 290 / 8,604 / 0
Towards Verified Artificial Intelligence
  Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry · 27 Jun 2016 · 128 / 203 / 0
Measuring Neural Net Robustness with Constraints
  Osbert Bastani, Yani Andrew Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, A. Nori, A. Criminisi · AAML · 24 May 2016 · 106 / 424 / 0
Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
  G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Muller · FAtt · 08 Dec 2015 · 77 / 743 / 0
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization
  Uri Shaham, Yutaro Yamada, S. Negahban · AAML · 17 Nov 2015 · 84 / 78 / 0
Analysis of classifiers' robustness to adversarial perturbations
  Alhussein Fawzi, Omar Fawzi, P. Frossard · AAML · 09 Feb 2015 · 107 / 360 / 0