Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

25 February 2021
Maura Pintor
Fabio Roli
Wieland Brendel
Battista Biggio
    AAML
arXiv: 2102.12827
Abstract

Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally demanding iterations, even if specialized to a given perturbation model. In this work, we overcome these limitations by proposing a fast minimum-norm (FMN) attack that works with different ℓp-norm perturbation models (p = 0, 1, 2, ∞), is robust to hyperparameter choices, does not require adversarial starting points, and converges within a few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an ℓp-norm constraint of size ε, while adapting ε to minimize the distance of the current sample to the decision boundary. Extensive experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and computation time, while reporting comparable or even smaller perturbation sizes.
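
The loop described in the abstract can be illustrated with a minimal sketch, shown below. This is not the authors' reference implementation: it assumes a PyTorch classifier `model` mapping inputs in [0, 1] with shape (N, C, H, W) to logits, restricts itself to the ℓ2 case, and uses an illustrative cosine schedule and hyperparameter names (`alpha_init`, `gamma_init`) that are assumptions rather than the paper's exact settings.

```python
# Illustrative FMN-style l2 attack loop (a sketch, not the paper's reference code).
import math
import torch


def fmn_l2_sketch(model, x, y, steps=100, alpha_init=1.0, gamma_init=0.05):
    """Search for a small l2 perturbation that makes `model` misclassify x."""
    delta = torch.zeros_like(x, requires_grad=True)                    # current perturbation
    eps = torch.full((x.shape[0],), float("inf"), device=x.device)     # per-sample norm budget
    best = torch.full_like(x, float("inf"))                            # smallest adversarial delta so far

    for i in range(steps):
        decay = (1 + math.cos(i / steps * math.pi)) / 2                # cosine annealing
        alpha, gamma = alpha_init * decay, gamma_init * decay

        logits = model(x + delta)
        # Logit difference: positive while the sample is still classified as y.
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, y.unsqueeze(1), -float("inf")).amax(dim=1)
        loss = (true_logit - other_logit).sum()
        grad, = torch.autograd.grad(loss, delta)

        with torch.no_grad():
            is_adv = logits.argmax(dim=1) != y
            norms = delta.flatten(1).norm(dim=1)

            # Track the smallest adversarial perturbation found so far.
            improved = is_adv & (norms < best.flatten(1).norm(dim=1))
            best[improved] = delta[improved]

            # Adapt the constraint: shrink eps for adversarial points, grow it
            # otherwise (the paper additionally uses a first-order distance estimate).
            eps = torch.where(
                is_adv,
                torch.minimum(eps, norms) * (1 - gamma),
                torch.where(torch.isinf(eps), norms + alpha, eps * (1 + gamma)),
            )

            # Descend the loss (toward misclassification), then project onto the
            # eps-ball and back into the valid input range [0, 1].
            g = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta = delta - alpha * g
            factor = (eps / delta.flatten(1).norm(dim=1).clamp_min(1e-12)).clamp(max=1.0)
            delta = (factor.view(-1, 1, 1, 1) * delta).clamp(-x, 1 - x)
        delta.requires_grad_(True)

    return best   # rows remain +inf where no adversarial example was found
```

Calling `x + fmn_l2_sketch(model, x, y)` then yields candidate minimum-norm adversarial examples; rows of the returned tensor stay infinite for samples where no misclassification was found within the step budget.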
