When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
David J. Miller, Yujia Wang, G. Kesidis
18 December 2017 · AAML

Papers citing "When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time" (9 papers)

TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems
Islam Debicha, Richard Bauwens, Thibault Debatty, Jean-Michel Dricot, Tayeb Kenaza, Wim Mees
27 Oct 2022 · AAML

Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
Florian Tramèr
24 Jul 2021 · AAML

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
28 May 2021 · AAML

Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Zhen Xiang, David J. Miller, G. Kesidis
15 Oct 2020

Anomalous Example Detection in Deep Learning: A Survey
Saikiran Bulusu, B. Kailkhura, Bo-wen Li, P. Varshney, D. Song
16 Mar 2020 · AAML

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
18 Nov 2019 · AAML

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
27 Aug 2019 · AAML

Motivating the Rules of the Game for Adversarial Example Research
Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David G. Andersen, George E. Dahl
18 Jul 2018 · AAML

Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch
João Monteiro, Isabela Albuquerque, Zahid Akhtar, T. Falk
21 Feb 2018 · AAML