EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

13 September 2017
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
Abstract

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples: a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on $L_2$ and $L_\infty$ distortion metrics. However, despite the fact that $L_1$ distortion accounts for the total variation and encourages sparsity in the perturbation, little has been developed for crafting $L_1$-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our elastic-net attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case. Experimental results on MNIST, CIFAR10, and ImageNet show that EAD can yield a distinct set of adversarial examples with small $L_1$ distortion and attains attack performance similar to state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, suggesting novel insights into leveraging $L_1$ distortion in adversarial machine learning and the security implications of DNNs.
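For concreteness, the elastic-net regularized objective described in the abstract can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the hinge-style targeted loss follows the form popularized by the Carlini-Wagner $L_2$ attack (which the abstract identifies as a special case, here $\beta = 0$), and the names `ead_objective`, `c`, `beta`, and `kappa` are illustrative choices rather than names taken from the paper.

```python
import numpy as np

def ead_objective(x_adv, x_orig, logits, target, c=1.0, beta=1e-3, kappa=0.0):
    """Elastic-net regularized attack objective (illustrative sketch).

    Combines a targeted misclassification loss with an elastic-net
    penalty (L1 + squared L2) on the perturbation. Setting beta = 0
    recovers an L2-regularized objective of the Carlini-Wagner form.
    """
    # Hinge-style targeted loss: push the target-class logit above
    # every other logit by a margin of at least kappa.
    target_logit = logits[target]
    best_other = np.max(np.delete(logits, target))
    attack_loss = max(best_other - target_logit + kappa, 0.0)

    # Elastic-net penalty on the perturbation delta = x_adv - x_orig.
    delta = x_adv - x_orig
    l1 = np.abs(delta).sum()
    l2_sq = np.square(delta).sum()

    return c * attack_loss + beta * l1 + l2_sq
```

In the full method, this objective is minimized over valid images (a box constraint such as $x \in [0,1]^p$), and the non-smooth $L_1$ term is handled with iterative shrinkage-thresholding updates; the sketch above omits the constraint and the optimizer and only evaluates the objective at a given point.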

View on arXiv: 1709.04114