By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. Adversarial perturbations generated by the Carlini and Wagner (CW) attack are among the most effective yet difficult-to-detect. While many countermeasures against AEs have been proposed, detection of adaptive CW AEs remains an open question. We find that, by randomly erasing some pixels in an AE and then restoring it with an inpainting technique, the AE tends to receive different classification results before and after these steps, while a benign sample does not show this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits this intriguing sensitivity of AEs. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique detects over 98% of AEs and has a very low false positive rate on benign images. The detection technique also exhibits high transferability: a detection system trained using CW AEs can accurately detect AEs generated using other attack methods. More importantly, our approach demonstrates strong resilience to adaptive attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
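The sketch below illustrates the core erase-and-restore idea described above: randomly erase a fraction of pixels, inpaint them, and flag the input if its predicted label changes. It is a minimal illustration, not the paper's implementation; the `classify` callback, the `erase_frac` and `n_trials` parameters, the Telea inpainting method, and the majority-vote decision rule are all assumptions made for this example.

```python
import numpy as np
import cv2


def erase_and_restore_detect(image, classify, erase_frac=0.1, n_trials=5, seed=0):
    """Flag `image` (HxWx3 uint8) as a suspected AE if its label changes
    after random erasing + inpainting. `classify` maps an image to a label.
    All parameter values here are illustrative, not the paper's settings."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    original_label = classify(image)

    flips = 0
    for _ in range(n_trials):
        # Randomly select pixels to erase (mask value 1 = pixel to restore).
        mask = (rng.random((h, w)) < erase_frac).astype(np.uint8)
        # Restore the erased pixels with an off-the-shelf inpainting method
        # (Telea's algorithm here; the paper's choice may differ).
        restored = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
        if classify(restored) != original_label:
            flips += 1

    # An AE tends to flip labels across trials; a benign image rarely does.
    return flips > n_trials // 2
```

In this sketch, repeating the erase-and-restore step several times and voting simply makes the label-change signal less dependent on any single random mask; the paper's actual decision procedure may differ.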