Early Methods for Detecting Adversarial Images (arXiv:1608.00530)
Dan Hendrycks, Kevin Gimpel
1 August 2016 · AAML

Papers citing "Early Methods for Detecting Adversarial Images" (50 of 101 papers shown)

DropCluster: A structured dropout for convolutional networks
Liyang Chen, P. Gautier, Sergul Aydore · 07 Feb 2020

RAID: Randomized Adversarial-Input Detection for Neural Networks
Hasan Ferit Eniser, M. Christakis, Valentin Wüstholz · AAML · 07 Feb 2020

GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems
Yanmao Man, Ming Li, Ryan M. Gerdes · AAML · 21 Jan 2020

DLA: Dense-Layer-Analysis for Adversarial Example Detection
Philip Sperl, Ching-yu Kao, Peng Chen, Konstantin Böttinger · AAML · 05 Nov 2019

Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
David Stutz, Matthias Hein, Bernt Schiele · AAML · 14 Oct 2019

Covariance-free Partial Least Squares: An Incremental Dimensionality Reduction Method
Artur Jordão, M. Lie, V. H. C. Melo, William Robson Schwartz · 05 Oct 2019

Towards neural networks that provably know when they don't know
Alexander Meinke, Matthias Hein · OODD · 26 Sep 2019

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain · AAML · 17 Sep 2019

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, S. Sabour, Colin Raffel, G. Cottrell, Geoffrey E. Hinton · GAN, AAML · 05 Jul 2019

ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan · AAML · 08 Jun 2019

GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde · AAML · 27 May 2019

Moving Target Defense for Deep Visual Sensing against Adversarial Examples
Qun Song, Zhenyu Yan, Rui Tan · AAML · 11 May 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · AAML · 12 Apr 2019

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks, Thomas G. Dietterich · OOD, VLM · 28 Mar 2019

A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki, Kumar Abhishek, Shekoofeh Azizi, Ghassan Hamarneh · AAML · 03 Mar 2019

Natural and Adversarial Error Detection using Invariance to Image Transformations
Yuval Bahat, Michal Irani, Gregory Shakhnarovich · AAML · 01 Feb 2019

Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika · NoLa · 28 Jan 2019

A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu · AAML · 26 Jan 2019

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang, Daniel Kroening, Wenjie Ruan, Marta Kwiatkowska, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi · AAML · 18 Dec 2018

Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings
François Menet, Paul Berthier, José M. Fernandez, M. Gagnon · AAML · 17 Dec 2018

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino · AAML · 02 Dec 2018

Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces
Ziv Katzir, Yuval Elovici · AAML · 22 Nov 2018

Adversarial Examples - A Complete Characterisation of the Phenomenon
A. Serban, E. Poll, Joost Visser · SILM, AAML · 02 Oct 2018

Adversarial Examples: Opportunities and Challenges
Jiliang Zhang, Chen Li · AAML · 13 Sep 2018

Bridging machine learning and cryptography in defence against adversarial attacks
O. Taran, Shideh Rezaeifar, Slava Voloshynovskiy · AAML · 05 Sep 2018

Mitigation of Adversarial Attacks through Embedded Feature Selection
Ziyi Bao, Luis Muñoz-González, Emil C. Lupu · AAML · 16 Aug 2018

Simultaneous Adversarial Training - Learn from Others Mistakes
Zukang Liao · AAML, GAN · 21 Jul 2018

Motivating the Rules of the Game for Adversarial Example Research
Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David G. Andersen, George E. Dahl · AAML · 18 Jul 2018

Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness
Priyadarshini Panda, Kaushik Roy · AAML · 05 Jul 2018

Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks, Thomas G. Dietterich · OOD · 04 Jul 2018

Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning
J. Dhaliwal, S. Shintre · AAML · 27 Jun 2018

Detection based Defense against Adversarial Examples from the Steganalysis Point of View
Jiayang Liu, Weiming Zhang, Yiwei Zhang, Dongdong Hou, Yujia Liu, Hongyue Zha, Nenghai Yu · AAML · 21 Jun 2018

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Pouya Samangouei, Maya Kabkab, Rama Chellappa · AAML, GAN · 17 May 2018

Breaking Transferability of Adversarial Samples with Randomness
Yan Zhou, Murat Kantarcioglu, B. Xi · AAML · 11 May 2018

Defending against Adversarial Images using Basis Functions Transformations
Uri Shaham, J. Garritano, Yutaro Yamada, Ethan Weinberger, A. Cloninger, Xiuyuan Cheng, Kelly P. Stanton, Y. Kluger · AAML · 28 Mar 2018

Clipping free attacks against artificial neural networks
B. Addad, Jérôme Kodjabachian, Christophe Meyer · AAML · 26 Mar 2018

Detecting Adversarial Perturbations with Saliency
Chiliang Zhang, Zhimou Yang, Zuochang Ye · AAML · 23 Mar 2018

Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning
Christopher Frederickson, Michael Moore, Glenn Dawson, R. Polikar · AAML · 20 Feb 2018

Divide, Denoise, and Defend against Adversarial Attacks
Seyed-Mohsen Moosavi-Dezfooli, A. Shrivastava, Oncel Tuzel · AAML · 19 Feb 2018

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana · SILM, AAML · 09 Feb 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner · AAML · 01 Feb 2018

Adversarial Examples: Attacks and Defenses for Deep Learning
Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li · SILM, AAML · 19 Dec 2017

When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
David J. Miller, Yujia Wang, G. Kesidis · AAML · 18 Dec 2017

Provably Minimally-Distorted Adversarial Examples
Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill · AAML · 29 Sep 2017

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao, Neil Zhenqiang Gong · AAML · 17 Sep 2017

Learning Universal Adversarial Perturbations with Generative Models
Jamie Hayes, G. Danezis · AAML · 17 Aug 2017

Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, D. Song · AAML · 15 Jun 2017

Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction
Bin Liang, Hongcheng Li, Miaoqiang Su, Xirong Li, Wenchang Shi, Xiaofeng Wang · AAML · 23 May 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner · AAML · 20 May 2017

Enhancing Robustness of Machine Learning Systems via Data Transformations
A. Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal · AAML · 09 Apr 2017