  3. 1803.03880

Combating Adversarial Attacks Using Sparse Representations

11 March 2018
S. Gopalakrishnan
Zhinus Marzi
Upamanyu Madhow
Ramtin Pedarsani
Abstract

It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs). In this paper, we make the case that sparse representations of the input data are a crucial tool for combating such attacks. For linear classifiers, we show that a sparsifying front end is provably effective against ℓ∞-bounded attacks, reducing output distortion due to the attack by a factor of roughly K/N, where N is the data dimension and K is the sparsity level. We then extend this concept to DNNs, showing that a "locally linear" model can be used to develop a theoretical foundation for crafting attacks and defenses. Experimental results for the MNIST dataset show the efficacy of the proposed sparsifying front end.
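To make the front-end idea concrete, the sketch below (illustrative only, not the authors' implementation) keeps the K largest-magnitude DCT coefficients of the input before a toy linear classifier and compares the output distortion induced by an ℓ∞-bounded perturbation with and without that front end. The DCT basis, K = 40, the Gaussian classifier weights, and the eps·sign(w) attack are all assumptions made for this demo.

```python
# Minimal sketch of a sparsifying front end ahead of a linear classifier.
# Assumptions for illustration: DCT basis, K = 40, toy Gaussian weights,
# and a worst-case eps * sign(w) attack on the unprotected classifier.
import numpy as np
from scipy.fft import dct, idct

def sparsify(x, K):
    """Project x onto an orthonormal basis (DCT), keep the K largest-magnitude
    coefficients, and reconstruct: the 'sparsifying front end'."""
    c = dct(x, norm="ortho")
    keep = np.argsort(np.abs(c))[-K:]
    c_sparse = np.zeros_like(c)
    c_sparse[keep] = c[keep]
    return idct(c_sparse, norm="ortho")

rng = np.random.default_rng(0)
N, K, eps = 784, 40, 0.1                     # data dimension, sparsity level, l_inf budget
w = rng.standard_normal(N)                   # toy linear classifier: x -> w @ x

# A K-sparse signal in the DCT basis plays the role of the clean input.
coeffs = np.zeros(N)
coeffs[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = idct(coeffs, norm="ortho")

# Worst-case l_inf-bounded perturbation against the *unprotected* classifier:
# e = eps * sign(w) shifts the output by eps * ||w||_1, which grows like N.
e = eps * np.sign(w)

raw_distortion = abs(w @ (x + e) - w @ x)
defended_distortion = abs(w @ sparsify(x + e, K) - w @ sparsify(x, K))
print(f"output distortion without front end: {raw_distortion:.2f}")
print(f"output distortion with front end:    {defended_distortion:.2f}")
```

In this toy setup the undefended distortion is eps * ||w||_1, which grows roughly linearly in N, while the front end confines the attack's effect to the K retained coefficients, mirroring the roughly K/N reduction the abstract states for linear classifiers.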
