ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

DeepFool: a simple and accurate method to fool deep neural networks
arXiv:1511.04599 · 14 November 2015
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML
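The DeepFool attack named above repeatedly projects an input onto the nearest decision boundary of a locally linearized classifier, accumulating a minimal perturbation until the predicted label flips. A minimal sketch of that update for an affine classifier f(x) = Wx + b, where the linearization is exact so the projection is too (illustrative NumPy code; the function name and parameters are ours, not from the paper):

```python
import numpy as np

def deepfool_affine(x, W, b, overshoot=0.02, max_iter=50):
    """Sketch of the DeepFool update for an affine classifier f(x) = W @ x + b.

    Returns the perturbed input and the accumulated minimal perturbation.
    """
    x0 = x.astype(float).copy()
    k0 = int(np.argmax(W @ x0 + b))          # original predicted class
    x_adv = x0.copy()
    r_total = np.zeros_like(x0)
    for _ in range(max_iter):
        f = W @ x_adv + b
        if int(np.argmax(f)) != k0:          # label flipped: done
            break
        w_diff = W - W[k0]                   # gradient differences, one row per class
        f_diff = f - f[k0]                   # score differences vs. original class
        dists = np.full(len(f), np.inf)
        for k in range(len(f)):
            if k != k0:                      # distance to each competing boundary
                dists[k] = abs(f_diff[k]) / (np.linalg.norm(w_diff[k]) + 1e-12)
        l = int(np.argmin(dists))            # closest decision boundary
        # minimal step onto that boundary under the (here exact) linearization
        r = (abs(f_diff[l]) / (np.linalg.norm(w_diff[l]) ** 2 + 1e-12)) * w_diff[l]
        r_total += r
        x_adv = x0 + (1 + overshoot) * r_total
    return x_adv, r_total
```

For a deep network the same loop applies with W replaced by per-class gradients recomputed at every iterate; the small overshoot pushes the point just past the boundary so the label actually changes.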

Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"

50 / 2,298 papers shown
On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adverserial Perturbations
Chi Zhang, Jinghan Jia, Burhaneddin Yaman, S. Moeller, Sijia Liu, Mingyi Hong, Mehmet Akçakaya
AAML | 56 · 8 · 0 | 25 Feb 2021

Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
AAML | 92 · 73 · 0 | 25 Feb 2021

Robust SleepNets
Yigit Can Alparslan, Edward J. Kim
AAML | 30 · 1 · 0 | 24 Feb 2021

Graphfool: Targeted Label Adversarial Attack on Graph Embedding
Jinyin Chen, Xiang Lin, Dunjie Zhang, Haibin Zheng, Guohan Huang, Hui Xiong, Xiang Lin
AAML | 79 · 3 · 0 | 24 Feb 2021

Multiplicative Reweighting for Robust Neural Network Optimization
Noga Bar, Tomer Koren, Raja Giryes
OOD, NoLa | 83 · 9 · 0 | 24 Feb 2021

Non-Singular Adversarial Robustness of Neural Networks
Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
AAML, OOD | 56 · 5 · 0 | 23 Feb 2021

Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev
AAML | 99 · 24 · 0 | 23 Feb 2021

Rethinking Natural Adversarial Examples for Classification Models
Xiao-Li Li, Jianmin Li, Ting Dai, Jie Shi, Jun Zhu, Xiaolin Hu
AAML, VLM | 128 · 13 · 0 | 23 Feb 2021

Effective and Efficient Vote Attack on Capsule Networks
Jindong Gu, Baoyuan Wu, Volker Tresp
AAML | 70 · 27 · 0 | 19 Feb 2021

Random Projections for Improved Adversarial Robustness
Ginevra Carbone, G. Sanguinetti, Luca Bortolussi
AAML | 61 · 2 · 0 | 18 Feb 2021

Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, K. Tomsovic, Hairong Qi
AAML | 127 · 15 · 0 | 17 Feb 2021

Domain Impression: A Source Data Free Domain Adaptation Method
V. Kurmi, Venkatesh Subramanian, Vinay P. Namboodiri
TTA | 217 · 152 · 0 | 17 Feb 2021

Just Noticeable Difference for Deep Machine Vision
Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao
VLM | 266 · 31 · 0 | 16 Feb 2021

Just Noticeable Difference for Machine Perception and Generation of Regularized Adversarial Images with Minimal Perturbation
Adil Kaan Akan, Emre Akbas, Fatoş T. Yarman Vural
AAML | 33 · 3 · 0 | 16 Feb 2021

Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows
Anubhab Ghosh, Antoine Honoré, Dong Liu, G. Henter, Saikat Chatterjee
BDL, VLM | 54 · 7 · 0 | 15 Feb 2021

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu
95 · 138 · 0 | 14 Feb 2021

Adversarial Attack on Network Embeddings via Supervised Network Poisoning
Viresh Gupta, Tanmoy Chakraborty
AAML | 74 · 12 · 0 | 14 Feb 2021

A Computability Perspective on (Verified) Machine Learning
T. Crook, J. Morgan, A. Pauly, M. Roggenbach
FaML | 39 · 3 · 0 | 12 Feb 2021

Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective
Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon
AAML | 94 · 42 · 0 | 12 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović
AAML | 92 · 27 · 0 | 10 Feb 2021

Dompteur: Taming Audio Adversarial Examples
Thorsten Eisenhofer, Lea Schonherr, Joel Frank, Lars Speckemeier, D. Kolossa, Thorsten Holz
AAML | 85 · 25 · 0 | 10 Feb 2021

RoBIC: A benchmark suite for assessing classifiers robustness
Thibault Maho, Benoît Bonnet, Teddy Furon, Erwan Le Merrer
AAML | 56 · 4 · 0 | 10 Feb 2021

Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis
Fengting Li, Xuankai Liu, Xiaoli Zhang, Qi Li, Kun Sun, Kang Li
AAML | 73 · 13 · 0 | 10 Feb 2021

Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers
Jacob Mitchell Springer, Melanie Mitchell, Garrett Kenyon
AAML | 56 · 13 · 0 | 09 Feb 2021

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML | 67 · 52 · 0 | 09 Feb 2021

Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
Ömer Faruk Tuna, Ferhat Ozgur Catak, M. T. Eskil
AAML | 90 · 33 · 0 | 08 Feb 2021

Adversarial Imaging Pipelines
Buu Phan, Fahim Mannan, Felix Heide
AAML | 53 · 26 · 0 | 07 Feb 2021

SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng
AAML | 73 · 14 · 0 | 07 Feb 2021

Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review
Dongrui Wu, Jiaxin Xu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu
AAML | 127 · 25 · 0 | 04 Feb 2021

IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks
Yixiang Wang, Jiqiang Liu, Xiaolin Chang, J. Misic, Vojislav B. Mišić
AAML | 69 · 12 · 0 | 03 Feb 2021

Key Technology Considerations in Developing and Deploying Machine Learning Models in Clinical Radiology Practice
V. Kulkarni, M. Gawali, A. Kharat
VLM | 117 · 21 · 0 | 03 Feb 2021

Towards Robust Neural Networks via Close-loop Control
Zhuotong Chen, Qianxiao Li, Zheng Zhang
OOD, AAML | 82 · 25 · 0 | 03 Feb 2021

Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction
Pu Sun, Yuezun Li, H. Qi, Siwei Lyu
55 · 17 · 0 | 01 Feb 2021

Admix: Enhancing the Transferability of Adversarial Attacks
Xiaosen Wang, Xu He, Jingdong Wang, Kun He
AAML | 151 · 201 · 0 | 31 Jan 2021

Cortical Features for Defense Against Adversarial Audio Attacks
Ilya Kavalerov, Frank Zheng, W. Czaja, Ramalingam Chellappa
AAML | 49 · 0 · 0 | 30 Jan 2021

You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries
Devin Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe Condessa, Zico Kolter
MLAU, AAML | 61 · 3 · 0 | 29 Jan 2021

Adversarial Learning with Cost-Sensitive Classes
Hao Shen, Sihong Chen, Ran Wang, Xizhao Wang
AAML | 70 · 11 · 0 | 29 Jan 2021

Improving Neural Network Robustness through Neighborhood Preserving Layers
Bingyuan Liu, Christopher Malon, Lingzhou Xue, E. Kruus
AAML | 35 · 5 · 0 | 28 Jan 2021

Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting
F. Nesti, Alessandro Biondi, Giorgio Buttazzo
AAML | 46 · 40 · 0 | 27 Jan 2021

The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs
Xinwei Zhao, Matthew C. Stamm
AAML | 49 · 4 · 0 | 26 Jan 2021

Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
Xinwei Zhao, Matthew C. Stamm
AAML | 47 · 3 · 0 | 26 Jan 2021

Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method
Satya M. Muddamsetty, M. N. Jahromi, Andreea-Emilia Ciontos, Laura M. Fenoy, T. Moeslund
AAML | 90 · 26 · 0 | 26 Jan 2021

Spectral Leakage and Rethinking the Kernel Size in CNNs
Nergis Tomen, Jan van Gemert
AAML | 61 · 19 · 0 | 25 Jan 2021

A Survey on Active Deep Learning: From Model-driven to Data-driven
Peng Liu, Lizhe Wang, Guojin He, Lei Zhao
85 · 14 · 0 | 25 Jan 2021

A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xing Gao, Gang Xiao, Aishan Liu, Zehao Zhao, Xianglong Liu, Wenjun Wu
AAML, ELM | 97 · 61 · 0 | 24 Jan 2021

A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network
Xinwei Zhao, Chen Chen, Matthew C. Stamm
GAN, AAML | 41 · 4 · 0 | 23 Jan 2021

Online Adversarial Purification based on Self-Supervision
Changhao Shi, Chester Holtz, Zhengchao Wan
AAML | 82 · 57 · 0 | 23 Jan 2021

Generating Black-Box Adversarial Examples in Sparse Domain
Hadi Zanddizari, Behnam Zeinali, Jerome Chang
AAML | 46 · 7 · 0 | 22 Jan 2021

A Person Re-identification Data Augmentation Method with Adversarial Defense Effect
Yunpeng Gong, Zhiyong Zeng, Liwen Chen, Yi-Xiao Luo, Bin Weng, Feng Ye
AAML | 83 · 19 · 0 | 21 Jan 2021

Can stable and accurate neural networks be computed? -- On the barriers of deep learning and Smale's 18th problem
Matthew J. Colbrook, Vegard Antun, A. Hansen
119 · 136 · 0 | 20 Jan 2021