Attacking the Madry Defense Model with L1-based Adversarial Examples

The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model. Attacks were constrained to perturb each pixel of the input image by a scaled maximal L∞ distortion ε = 0.3. This discourages the use of attacks which are not optimized on the L∞ distortion metric. Our experimental results demonstrate that by relaxing the L∞ constraint of the competition, the elastic-net attack to deep neural networks (EAD) can generate transferable adversarial examples which, despite their high average L∞ distortion, have minimal visual distortion. These results call into question the use of L∞ as a sole measure for visual distortion, and further demonstrate the power of EAD at generating robust adversarial examples.
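For context, a sketch of the elastic-net objective that EAD is understood to optimize (notation here is assumed rather than quoted from this paper: x0 is the original image, t the target label, f(x, t) the targeted classification loss, c a loss-weighting constant, and β the L1 regularization parameter):

    minimize over x:   c · f(x, t) + β · ‖x − x0‖₁ + ‖x − x0‖₂²
    subject to:        x ∈ [0, 1]^p

The L1 term encourages sparse perturbations, which is why EAD examples can exceed an ε = 0.3 L∞ budget on a few pixels while remaining visually close to the original image.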
View on arXiv