A Statistical Defense Approach for Detecting Adversarial Examples

Pattern Recognition in Information Systems (PRIS), 2019
Abstract

Adversarial examples are maliciously modified inputs crafted to fool deep neural networks (DNNs). The existence of such inputs is a major obstacle to the deployment of DNN-based solutions. Many researchers have already contributed to the topic, providing both cutting-edge attack techniques and a variety of defensive strategies. In this work, we focus on developing a system capable of detecting adversarial samples by exploiting statistical information from the training set. Our detector computes several distorted replicas of the test input and collects the classifier's prediction vectors over them to build a meaningful signature for the detection task. The signature is then projected onto a class-specific statistic vector, selected according to the classifier's prediction on the original input, to infer the input's nature. We show that our method reliably detects malicious inputs, outperforming state-of-the-art approaches in various settings, while remaining complementary to other defensive solutions.
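To make the pipeline concrete, below is a minimal sketch of the detection scheme the abstract describes. It is not the paper's implementation: the distortion family (additive Gaussian noise), the signature layout (concatenated prediction vectors), the per-class statistic (a mean signature over clean training samples), the cosine-similarity projection, and the decision threshold are all assumptions for illustration; `classify` stands in for the trained DNN's softmax output.

```python
import numpy as np

def distort(x, rng, n_replicas=8, noise_std=0.05):
    """Produce several randomly distorted replicas of the input.
    Assumption: additive Gaussian noise; the paper's distortions may differ."""
    return [np.clip(x + rng.normal(0.0, noise_std, x.shape), 0.0, 1.0)
            for _ in range(n_replicas)]

def signature(x, classify, rng):
    """Collect the classifier's prediction vectors over all distorted
    replicas and stack them into a single signature for detection."""
    preds = [classify(r) for r in distort(x, rng)]
    return np.concatenate(preds)

def class_statistics(samples, labels, classify, rng, n_classes):
    """Per-class mean signature over clean training samples -- one plausible
    way to 'exploit statistical information from the training set'."""
    sigs = [signature(x, classify, rng) for x in samples]
    return [np.mean([s for s, y in zip(sigs, labels) if y == c], axis=0)
            for c in range(n_classes)]

def is_adversarial(x, classify, class_stats, rng, threshold=0.5):
    """Project the input's signature onto the statistic vector of the class
    predicted for the ORIGINAL input; a low similarity suggests an
    adversarial example. Threshold and scoring rule are assumptions."""
    predicted_class = int(np.argmax(classify(x)))
    sig = signature(x, classify, rng)
    stat = class_stats[predicted_class]
    score = sig @ stat / (np.linalg.norm(sig) * np.linalg.norm(stat) + 1e-12)
    return score < threshold
```

In practice, the threshold would be calibrated on a held-out split of clean and attacked inputs; the sketch only illustrates the structure of the signature-and-projection idea, not the paper's exact statistic or decision rule.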
