ResearchTrend.AI

DeepFool: a simple and accurate method to fool deep neural networks
arXiv:1511.04599
14 November 2015
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML

Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"

50 / 2,298 papers shown
Internet of Predictable Things (IoPT) Framework to Increase Cyber-Physical System Resiliency
Umit Cali, Murat Kuzlu, Vinayak Sharma, M. Pipattanasomporn, Ferhat Ozgur Catak
13 · 1 · 0
19 Jan 2021

PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack
Jie Wang, Z. Yin, Jin Tang, Jing Jiang, Bin Luo
AAML
64 · 2 · 0
19 Jan 2021

Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization
Jie Wang, Z. Yin, Jing Jiang, Yang Du
AAML
101 · 8 · 0
19 Jan 2021

Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, E. Bitar, Ersin Yumer, R. Urtasun
AAML
88 · 78 · 0
17 Jan 2021

Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation
Li-Yun Wang, Yeganeh Jalalpour, W. Feng
39 · 0 · 0
14 Jan 2021
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
Yujin Huang, Han Hu, Chunyang Chen
AAML, FedML
115 · 33 · 0
12 Jan 2021

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini
MIACV, FedML
173 · 226 · 0
11 Jan 2021

The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt
AAML
192 · 35 · 0
11 Jan 2021

SyReNN: A Tool for Analyzing Deep Neural Networks
Matthew Sotoudeh, Aditya V. Thakur
AAML, GNN
63 · 16 · 0
09 Jan 2021

Towards a Robust and Trustworthy Machine Learning System Development: An Engineering Perspective
Pulei Xiong, Scott Buffett, Shahrear Iqbal, Philippe Lamontagne, M. Mamun, Heather Molyneaux
OOD
81 · 15 · 0
08 Jan 2021

Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks
Marissa Dotter, Sherry Xie, Keith Manville, Josh Harguess, Colin Busho, Mikel Rodriguez
AAML
45 · 2 · 0
08 Jan 2021

Adversarial Machine Learning for 5G Communications Security
Y. Sagduyu, T. Erpek, Yi Shi
AAML
85 · 43 · 0
07 Jan 2021

Corner case data description and detection
Tinghui Ouyang, Vicent Sant Marco, Yoshinao Isobe, H. Asoh, Y. Oiwa, Yoshiki Seo
AAML
57 · 13 · 0
07 Jan 2021
Understanding the Error in Evaluating Adversarial Robustness
Pengfei Xia, Ziqiang Li, Hongjing Niu, Bin Li
AAML, ELM
76 · 5 · 0
07 Jan 2021

Practical Blind Membership Inference Attack via Differential Comparisons
Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
MIACV
194 · 124 · 0
05 Jan 2021

Local Competition and Stochasticity for Adversarial Robustness in Deep Learning
Konstantinos P. Panousis, S. Chatzis, Antonios Alexos, Sergios Theodoridis
BDL, AAML, OOD
112 · 19 · 0
04 Jan 2021

Local Black-box Adversarial Attacks: A Query Efficient Approach
Tao Xiang, Hangcheng Liu, Shangwei Guo, Tianwei Zhang, X. Liao
AAML, MLAU
46 · 15 · 0
04 Jan 2021

Active Learning Under Malicious Mislabeling and Poisoning Attacks
Jing Lin, R. Luley, Kaiqi Xiong
AAML
83 · 8 · 0
01 Jan 2021

Patch-wise++ Perturbation for Adversarial Targeted Attacks
Lianli Gao, Qilong Zhang, Jingkuan Song, Heng Tao Shen
AAML
120 · 19 · 0
31 Dec 2020

Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers
Krishna Kanth Nakka, Mathieu Salzmann
AAML
31 · 5 · 0
30 Dec 2020

With False Friends Like These, Who Can Notice Mistakes?
Lue Tao, Lei Feng, Jinfeng Yi, Songcan Chen
AAML
70 · 6 · 0
29 Dec 2020

Analysis of Dominant Classes in Universal Adversarial Perturbations
Jon Vadillo, Roberto Santana, Jose A. Lozano
AAML
64 · 5 · 0
28 Dec 2020
Adversarial Momentum-Contrastive Pre-Training
Cong Xu, Dan Li, Min Yang
SSL
74 · 15 · 0
24 Dec 2020

The Translucent Patch: A Physical and Universal Attack on Object Detectors
Alon Zolfi, Moshe Kravchik, Yuval Elovici, A. Shabtai
AAML
67 · 89 · 0
23 Dec 2020

Discovering Robust Convolutional Architecture at Targeted Capacity: A Multi-Shot Approach
Xuefei Ning, Jiaqi Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
AAML
95 · 5 · 0
22 Dec 2020

Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines
Aidan Kehoe, P. Wittek, Yanbo Xue, Alejandro Pozas-Kerstjens
AAML
82 · 7 · 0
21 Dec 2020

Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring
Chenchen Zhao, Hao Li
AAML
27 · 3 · 0
21 Dec 2020

Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Chenchen Zhao, Hao Li
AAML
41 · 0 · 0
21 Dec 2020

Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks
Jayendra Kantipudi, S. Dubey, Soumendu Chakraborty
AAML
91 · 22 · 0
20 Dec 2020

ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries
Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming
AAML
85 · 5 · 0
18 Dec 2020

Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks
Kieran Browne, Ben Swift
AAML, GAN
58 · 30 · 0
18 Dec 2020
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks
Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou
AAML, MedIm
109 · 16 · 0
17 Dec 2020

Adversarial trading
Alexandre Miot
AAML
56 · 1 · 0
16 Dec 2020

Exacerbating Algorithmic Bias through Fairness Attacks
Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan
AAML
91 · 69 · 0
16 Dec 2020

FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems
Lu Chen, Jiao Sun, Wenyuan Xu
AAML
35 · 16 · 0
15 Dec 2020

Robustness Threats of Differential Privacy
Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
AAML
97 · 14 · 0
14 Dec 2020

Achieving Adversarial Robustness Requires An Active Teacher
Chao Ma, Lexing Ying
71 · 1 · 0
14 Dec 2020

Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning
Ömer Faruk Tuna, Ferhat Ozgur Catak, M. T. Eskil
AAML
83 · 11 · 0
11 Dec 2020

SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers
Bingyao Huang, Haibin Ling
AAML
79 · 20 · 0
10 Dec 2020

Generating Out of Distribution Adversarial Attack using Latent Space Poisoning
Ujjwal Upadhyay, Prerana Mukherjee
78 · 7 · 0
09 Dec 2020

Risk Management Framework for Machine Learning Security
J. Breier, A. Baldwin, H. Balinsky, Yang Liu
AAML
31 · 3 · 0
09 Dec 2020
Are DNNs fooled by extremely unrecognizable images?
Soichiro Kumano, Hiroshi Kera, T. Yamasaki
AAML
44 · 3 · 0
07 Dec 2020

Backpropagating Linearly Improves Transferability of Adversarial Examples
Yiwen Guo, Qizhang Li, Hao Chen
FedML, AAML
82 · 116 · 0
07 Dec 2020

A Singular Value Perspective on Model Robustness
Malhar Jere, Maghav Kumar, F. Koushanfar
AAML
86 · 6 · 0
07 Dec 2020

Practical No-box Adversarial Attacks against DNNs
Qizhang Li, Yiwen Guo, Hao Chen
AAML
75 · 59 · 0
04 Dec 2020

Towards Natural Robustness Against Adversarial Examples
Haoyu Chu, Shikui Wei, Yao-Min Zhao
AAML
26 · 1 · 0
04 Dec 2020

Visually Imperceptible Adversarial Patch Attacks on Digital Images
Yaguan Qian, Jiamin Wang, Bin Wang, Xiang Ling, Zhaoquan Gu, Chunming Wu, Wassim Swaileh
AAML
66 · 2 · 0
02 Dec 2020

Overcoming Measurement Inconsistency in Deep Learning for Linear Inverse Problems: Applications in Medical Imaging
Marija Vella, João F. C. Mota
110 · 4 · 0
29 Nov 2020

A Targeted Universal Attack on Graph Convolutional Network
Jiazhu Dai, Weifeng Zhu, Xiangfeng Luo
AAML, GNN
44 · 20 · 0
29 Nov 2020

FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
Debayan Deb, Xiaoming Liu, Anil K. Jain
CVBM, AAML, PICV
98 · 27 · 0
28 Nov 2020