DeepFool: a simple and accurate method to fool deep neural networks
14 November 2015
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
P. Frossard
    AAML

Papers citing "DeepFool: a simple and accurate method to fool deep neural networks"

50 / 2,298 papers shown
Shape Defense Against Adversarial Attacks
Ali Borji
AAML
31
1
0
31 Aug 2020
Color and Edge-Aware Adversarial Image Perturbations
R. Bassett
Mitchell Graves
Patrick Reilly
AAML
29
6
0
28 Aug 2020
Adversarially Robust Learning via Entropic Regularization
Gauri Jagatap
Ameya Joshi
A. B. Chowdhury
S. Garg
Chinmay Hegde
OOD
128
11
0
27 Aug 2020
Measurement-driven Security Analysis of Imperceptible Impersonation Attacks
Shasha Li
K. Khalil
Yikang Shen
Chengyu Song
S. Krishnamurthy
Amit K. Roy-Chowdhury
A. Swami
AAML
36
2
0
26 Aug 2020
Adversarially Training for Audio Classifiers
Raymel Alfonso Sallo
Mohammad Esmaeilpour
P. Cardinal
AAML
47
8
0
26 Aug 2020
Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition
Ping Liu
Yuewei Lin
Zibo Meng
Lu Lu
Weihong Deng
Qiufeng Wang
Yi Yang
94
27
0
26 Aug 2020
Yet Another Intermediate-Level Attack
Qizhang Li
Yiwen Guo
Hao Chen
AAML
59
52
0
20 Aug 2020
Prevalence of Neural Collapse during the terminal phase of deep learning training
Vardan Papyan
Xuemei Han
D. Donoho
261
582
0
18 Aug 2020
Improving adversarial robustness of deep neural networks by using semantic information
Lina Wang
Rui Tang
Yawei Yue
Xingshu Chen
Wei Wang
Yi Zhu
Xuemei Zeng
AAML
56
14
0
18 Aug 2020
A Deep Dive into Adversarial Robustness in Zero-Shot Learning
M. K. Yucel
R. G. Cinbis
P. D. Sahin
VLM
68
7
0
17 Aug 2020
AP-Loss for Accurate One-Stage Object Detection
Kean Chen
Weiyao Lin
Jianguo Li
John See
Ji Wang
Junni Zou
ObjD
92
66
0
17 Aug 2020
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks
Elahe Arani
F. Sarfraz
Bahram Zonooz
AAML
60
9
0
16 Aug 2020
Adversarial Filters for Secure Modulation Classification
A. Berian
K. Staab
N. Teku
G. Ditzler
T. Bose
Ravi Tandon
AAML
73
7
0
15 Aug 2020
On the Generalization Properties of Adversarial Training
Yue Xing
Qifan Song
Guang Cheng
AAML
78
34
0
15 Aug 2020
Generating Image Adversarial Examples by Embedding Digital Watermarks
Yuexin Xiang
Tiantian Li
Wei Ren
Tianqing Zhu
K. Choo
AAML WIGM
25
7
0
14 Aug 2020
Semantically Adversarial Learnable Filters
Ali Shahin Shamsabadi
Changjae Oh
Andrea Cavallaro
GAN
87
6
0
13 Aug 2020
RGB cameras failures and their effects in autonomous driving applications
Andrea Ceccarelli
Francesco Secci
90
32
0
13 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
118
73
0
07 Aug 2020
Stronger and Faster Wasserstein Adversarial Attacks
Kaiwen Wu
Allen Wang
Yaoliang Yu
AAML
77
32
0
06 Aug 2020
Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples
Xiaojun Jia
Xingxing Wei
Xiaochun Cao
Xiaoguang Han
AAML
75
88
0
05 Aug 2020
TREND: Transferability based Robust ENsemble Design
Deepak Ravikumar
Sangamesh Kodge
Isha Garg
Kaushik Roy
OOD AAML
35
4
0
04 Aug 2020
Eigen-CAM: Class Activation Map using Principal Components
Mohammed Bany Muhammad
M. Yeasin
78
346
0
01 Aug 2020
Vulnerability Under Adversarial Machine Learning: Bias or Variance?
Hossein Aboutalebi
M. Shafiee
Michelle Karg
C. Scharfenberger
A. Wong
AAML
49
3
0
01 Aug 2020
On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations
Md Rafiqul Islam Rabin
Nghi D. Q. Bui
Ke Wang
Yijun Yu
Lingxiao Jiang
Mohammad Amin Alipour
154
90
0
31 Jul 2020
Towards Class-Oriented Poisoning Attacks Against Neural Networks
Bingyin Zhao
Yingjie Lao
SILM AAML
22
18
0
31 Jul 2020
TEAM: We Need More Powerful Adversarial Examples for DNNs
Yaguan Qian
Xi-Ming Zhang
Bin Wang
Wei Li
Zhaoquan Gu
Haijiang Wang
Wassim Swaileh
AAML
58
0
0
31 Jul 2020
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks
Yi Zeng
Han Qiu
G. Memmi
Meikang Qiu
AAML
66
50
0
30 Jul 2020
A General Framework For Detecting Anomalous Inputs to DNN Classifiers
Jayaram Raghuram
Varun Chandrasekaran
S. Jha
Suman Banerjee
AAML
106
35
0
29 Jul 2020
End-to-End Adversarial White Box Attacks on Music Instrument Classification
Katharina Prinz
A. Flexer
AAML
26
0
0
29 Jul 2020
Stylized Adversarial Defense
Muzammal Naseer
Salman Khan
Munawar Hayat
Fahad Shahbaz Khan
Fatih Porikli
GAN AAML
80
16
0
29 Jul 2020
Cassandra: Detecting Trojaned Networks from Adversarial Perturbations
Xiaoyu Zhang
Ajmal Mian
Rohit Gupta
Nazanin Rahnavard
M. Shah
AAML
91
26
0
28 Jul 2020
From Sound Representation to Model Robustness
Mohamad Esmaeilpour
P. Cardinal
Alessandro Lameiras Koerich
AAML
77
6
0
27 Jul 2020
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers
Yuzhen Ding
Nupur Thakur
Baoxin Li
AAML
73
3
0
20 Jul 2020
DiffRNN: Differential Verification of Recurrent Neural Networks
Sara Mohammadinejad
Brandon Paulsen
Chao Wang
Jyotirmoy V. Deshmukh
111
12
0
20 Jul 2020
Robust Tracking against Adversarial Attacks
Shuai Jia
Chao Ma
Yibing Song
Xiaokang Yang
AAML
75
51
0
20 Jul 2020
Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks
Nupur Thakur
Yuzhen Ding
Baoxin Li
AAML
33
3
0
20 Jul 2020
Exploiting vulnerabilities of deep neural networks for privacy protection
Ricardo Sánchez-Matilla
C. Li
Ali Shahin Shamsabadi
Riccardo Mazzon
Andrea Cavallaro
AAML PICV
56
24
0
19 Jul 2020
Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
Ehsan Toreini
Mhairi Aitken
Kovila P. L. Coopamootoo
Karen Elliott
Vladimiro González-Zelaya
P. Missier
Magdalene Ng
Aad van Moorsel
74
18
0
17 Jul 2020
On Adversarial Robustness: A Neural Architecture Search perspective
Chaitanya Devaguptapu
Devansh Agarwal
Gaurav Mittal
Pulkit Gopalani
V. Balasubramanian
OOD AAML
68
34
0
16 Jul 2020
Odyssey: Creation, Analysis and Detection of Trojan Models
Marzieh Edraki
Nazmul Karim
Nazanin Rahnavard
Ajmal Mian
M. Shah
AAML
97
14
0
16 Jul 2020
Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels
Wenjie Wan
Zhaodi Zhang
Yiwei Zhu
Min Zhang
Fu Song
AAML
70
8
0
16 Jul 2020
Explicit Regularisation in Gaussian Noise Injections
A. Camuto
M. Willetts
Umut Simsekli
Stephen J. Roberts
Chris Holmes
100
59
0
14 Jul 2020
Towards a Theoretical Understanding of the Robustness of Variational Autoencoders
A. Camuto
M. Willetts
Stephen J. Roberts
Chris Holmes
Tom Rainforth
AAML DRL
65
31
0
14 Jul 2020
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack
Yupeng Cheng
Qing Guo
Felix Juefei Xu
Wei Feng
Shang-Wei Lin
Weisi Lin
Yang Liu
AAML
99
46
0
14 Jul 2020
Nested Learning For Multi-Granular Tasks
Raphaël Achddou
J. Matias Di Martino
Guillermo Sapiro
26
1
0
13 Jul 2020
Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
Chaoning Zhang
Philipp Benz
Tooba Imtiaz
In-So Kweon
SSL AAML
83
119
0
13 Jul 2020
Probabilistic Jacobian-based Saliency Maps Attacks
Théo Combey
António Loison
Maxime Faucher
H. Hajri
AAML
110
19
0
12 Jul 2020
Representation Learning via Adversarially-Contrastive Optimal Transport
A. Cherian
Shuchin Aeron
OT
43
7
0
11 Jul 2020
Boundary thickness and robustness in learning models
Yaoqing Yang
Rekha Khanna
Yaodong Yu
A. Gholami
Kurt Keutzer
Joseph E. Gonzalez
Kannan Ramchandran
Michael W. Mahoney
OOD
72
42
0
09 Jul 2020
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Hongyi Wang
Kartik K. Sreenivasan
Shashank Rajput
Harit Vishwakarma
Saurabh Agarwal
Jy-yong Sohn
Kangwook Lee
Dimitris Papailiopoulos
FedML
112
616
0
09 Jul 2020