ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

DeepFool: a simple and accurate method to fool deep neural networks

14 November 2015
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
P. Frossard
    AAML

Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"

50 / 2,298 papers shown
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs
  Rana Abou-Khamis, Ashraf Matrawy
  AAML · 08 Jul 2020

How benign is benign overfitting?
  Amartya Sanyal, P. Dokania, Varun Kanade, Philip Torr
  NoLa, AAML · 08 Jul 2020

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
  Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach
  AAML · 05 Jul 2020

On Connections between Regularizations for Improving DNN Robustness
  Yiwen Guo, Long Chen, Yurong Chen, Changshui Zhang
  AAML · 04 Jul 2020

Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring
  Zhihui Shao, Jianyi Yang, Shaolei Ren
  HILM · 03 Jul 2020

Outlier Detection through Null Space Analysis of Neural Networks
  Matthew Cook, A. Zare, P. Gader
  02 Jul 2020

Generating Adversarial Examples with Controllable Non-transferability
  Renzhi Wang, Tianwei Zhang, Xiaofei Xie, Lei Ma, Cong Tian, Felix Juefei-Xu, Yang Liu
  SILM, AAML · 02 Jul 2020

Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
  Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Ines Goicoechea-Telleria, Raul Orduna Urrutia
  AAML · 02 Jul 2020

Robust and Accurate Authorship Attribution via Program Normalization
  Yizhen Wang, Mohannad J. Alhanahnah, Ke Wang, Mihai Christodorescu, S. Jha
  AAML · 01 Jul 2020

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
  S. Silva, Peyman Najafirad
  AAML, OOD · 01 Jul 2020

Adversarial Example Games
  A. Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton
  AAML, GAN · 01 Jul 2020

ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
  Miguel Villarreal-Vasquez, B. Bhargava
  AAML · 01 Jul 2020

Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks
  Kishor Datta Gupta, Zahid Akhtar, D. Dasgupta
  AAML · 01 Jul 2020

Unifying Model Explainability and Robustness via Machine-Checkable Concepts
  Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar
  AAML · 01 Jul 2020

Generating Adversarial Examples with an Optimized Quality
  Aminollah Khormali, Daehun Nyang, David A. Mohaisen
  AAML · 30 Jun 2020

Geometry-Inspired Top-k Adversarial Perturbations
  Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
  AAML · 28 Jun 2020

FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications
  Yunfei Song, Tian Liu, Tongquan Wei, Xiangfeng Wang, Zhe Tao, Mingsong Chen
  28 Jun 2020

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
  Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu
  AAML · 26 Jun 2020

Orthogonal Deep Models As Defense Against Black-Box Attacks
  M. Jalwana, Naveed Akhtar, Bennamoun, Ajmal Mian
  AAML · 26 Jun 2020

Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
  Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein
  23 Jun 2020

RayS: A Ray Searching Method for Hard-label Adversarial Attack
  Jinghui Chen, Quanquan Gu
  AAML · 23 Jun 2020

Lipschitz Recurrent Neural Networks
  N. Benjamin Erichson, Omri Azencot, A. Queiruga, Liam Hodgkinson, Michael W. Mahoney
  22 Jun 2020

Students Need More Attention: BERT-based Attention Model for Small Data with Application to Automatic Patient Message Triage
  Shijing Si, Rui Wang, Jedrek Wosik, Hao Zhang, D. Dov, Guoyin Wang, Ricardo Henao, Lawrence Carin
  22 Jun 2020

Network Moments: Extensions and Sparse-Smooth Attacks
  Modar Alfadly, Adel Bibi, Emilio Botero, Salman Alsubaihi, Guohao Li
  AAML · 21 Jun 2020

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
  Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang
  SILM · 20 Jun 2020

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers
  I. Fursov, A. Zaytsev, Nikita Klyuchnikov, A. Kravchenko, Evgeny Burnaev
  AAML, SILM · 19 Jun 2020

Adversarial Attacks for Multi-view Deep Models
  Xuli Sun, Shiliang Sun
  AAML · 19 Jun 2020

REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions
  Lokender Tiwari, Anish Madan, Saket Anand, Subhashis Banerjee
  AAML · 18 Jun 2020

Adversarial Examples Detection and Analysis with Layer-wise Autoencoders
  Bartosz Wójcik, P. Morawiecki, Marek Śmieja, Tomasz Krzyżek, Przemysław Spurek, Jacek Tabor
  GAN · 17 Jun 2020

Adversarial Defense by Latent Style Transformations
  Shuo Wang, Surya Nepal, A. Abuadbba, Carsten Rudolph, M. Grobler
  AAML · 17 Jun 2020

Total Deep Variation: A Stable Regularizer for Inverse Problems
  Erich Kobler, Alexander Effland, K. Kunisch, Thomas Pock
  MedIm · 15 Jun 2020

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data
  Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou
  15 Jun 2020

Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems
  Yuanjiang Cao, Xiaocong Chen, Lina Yao, Xianzhi Wang, W. Zhang
  AAML · 14 Jun 2020

PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
  Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, A. Chandar
  AAML · 14 Jun 2020

Defensive Approximation: Securing CNNs using Approximate Computing
  Amira Guesmi, Ihsen Alouani, Khaled N. Khasawneh, M. Baklouti, T. Frikha, Mohamed Abid, Nael B. Abu-Ghazaleh
  AAML · 13 Jun 2020

Adversarial Self-Supervised Contrastive Learning
  Minseon Kim, Jihoon Tack, Sung Ju Hwang
  SSL · 13 Jun 2020

Targeted Adversarial Perturbations for Monocular Depth Prediction
  A. Wong, Safa Cicek, Stefano Soatto
  AAML, MDE · 12 Jun 2020

Towards Robust Pattern Recognition: A Review
  Xu-Yao Zhang, Cheng-Lin Liu, C. Suen
  OOD, HAI · 12 Jun 2020

Backdoors in Neural Models of Source Code
  Goutham Ramakrishnan, Aws Albarghouthi
  AAML, SILM · 11 Jun 2020

Achieving robustness in classification using optimal transport with hinge regularization
  M. Serrurier, Franck Mamalet, Alberto González Sanz, Thibaut Boissin, Jean-Michel Loubes, E. del Barrio
  AAML · 11 Jun 2020

Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks
  Nataniel Ruiz, Sarah Adel Bargal, Stan Sclaroff
  AAML · 11 Jun 2020

Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors
  Gerda Bortsova, C. González-Gonzalo, S. Wetstein, Florian Dubost, Ioannis Katramados, ..., Bram van Ginneken, J. Pluim, M. Veta, Clara I. Sánchez, Marleen de Bruijne
  AAML, MedIm · 11 Jun 2020

Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
  Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li
  10 Jun 2020

Provable tradeoffs in adversarially robust classification
  Yan Sun, Hamed Hassani, David Hong, Alexander Robey
  09 Jun 2020

Towards an Intrinsic Definition of Robustness for a Classifier
  Théo Giraudon, Vincent Gripon, Matthias Löwe, Franck Vermet
  OOD, AAML · 09 Jun 2020

Picket: Guarding Against Corrupted Data in Tabular Data during Learning and Inference
  Zifan Liu, Zhechun Zhou, Theodoros Rekatsinas
  08 Jun 2020

Adversarial Feature Desensitization
  P. Bashivan, Reza Bayat, Adam Ibrahim, Kartik Ahuja, Mojtaba Faramarzi, Touraj Laleh, Blake A. Richards, Irina Rish
  AAML · 08 Jun 2020

Tricking Adversarial Attacks To Fail
  Blerta Lindqvist
  AAML · 08 Jun 2020

Consistency Regularization for Certified Robustness of Smoothed Classifiers
  Jongheon Jeong, Jinwoo Shin
  AAML · 07 Jun 2020

mFI-PSO: A Flexible and Effective Method in Adversarial Image Generation for Deep Neural Networks
  Hai Shu, Ronghua Shi, Qiran Jia, Hongtu Zhu, Ziqi Chen
  AAML · 05 Jun 2020