Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing

14 December 2018
Jingyi Wang
Guoliang Dong
Jun Sun
Xinyu Wang
Peixin Zhang
    AAML
Abstract

Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples. By transforming a normal sample with carefully crafted, human-imperceptible perturbations, even a highly accurate DNN can be made to produce wrong decisions. Multiple defense mechanisms have been proposed to hinder the generation of such adversarial samples; however, recent work has shown that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples when random mutations are imposed on the DNN. We therefore first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity. We then integrate statistical model checking and mutation testing to check at runtime whether an input sample is likely to be normal or adversarial by measuring its sensitivity. We evaluated our approach on the MNIST and CIFAR10 datasets. The results show that our approach detects adversarial samples generated by state-of-the-art attack methods efficiently and accurately.
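As a rough illustration of the sensitivity idea described in the abstract, the sketch below measures how often randomly mutated copies of a classifier change their prediction on a given input. It assumes a PyTorch classifier `model` and a single input tensor `x`; the mutation operator (Gaussian noise added to all weights), the mutant count, and the fixed threshold are illustrative placeholders, not the paper's exact mutation operators or its statistical-model-checking decision procedure.

```python
# Minimal sketch, assuming a PyTorch nn.Module classifier and a single
# unbatched input tensor x. Hyperparameters are illustrative only.
import copy
import torch

def label_change_rate(model, x, n_mutants=50, sigma=0.01):
    """Fraction of randomly mutated models whose prediction on x differs
    from the original model's prediction (the 'sensitivity')."""
    model.eval()
    with torch.no_grad():
        original_label = model(x.unsqueeze(0)).argmax(dim=1).item()
        changes = 0
        for _ in range(n_mutants):
            mutant = copy.deepcopy(model)
            for p in mutant.parameters():
                # Gaussian weight mutation; the paper uses specific
                # model-mutation operators instead of plain noise.
                p.add_(sigma * torch.randn_like(p))
            if mutant(x.unsqueeze(0)).argmax(dim=1).item() != original_label:
                changes += 1
    return changes / n_mutants

def looks_adversarial(model, x, threshold=0.2):
    # Adversarial samples tend to show a much higher label-change rate
    # under model mutation than normal samples; this fixed threshold
    # stands in for the paper's hypothesis-testing procedure.
    return label_change_rate(model, x) > threshold
```

In the paper this decision is made with a sequential statistical test rather than a fixed threshold, which lets the detector stop mutating early once the evidence is strong enough in either direction.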
