ResearchTrend.AI

When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers

31 October 2018
Yujia Wang, David J. Miller, M. Schaar
AAML
ArXiv (abs) · PDF · HTML

Papers citing "When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers"

4 citing papers shown
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
AAML · 28 May 2021
Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 27 Aug 2019
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis
AAML · 12 Apr 2019
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
David J. Miller, Yujia Wang, G. Kesidis
AAML · 18 Dec 2017