Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

3 July 2019
Francesco Croce, Matthias Hein
arXiv: 1907.02044 (AAML)
Abstract

The robustness of neural network-based classifiers against adversarial manipulation is mainly evaluated with empirical attacks, since methods for exact computation, even when available, do not scale to large networks. In this paper we propose a new white-box adversarial attack with respect to the $l_p$-norms for $p \in \{1, 2, \infty\}$, aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high-quality results, and minimizes the size of the perturbation, so that it returns the robust accuracy at every threshold with a single run. It performs better than or comparably to state-of-the-art attacks that are partially specialized to a single $l_p$-norm, and is robust to the phenomenon of gradient masking.
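To make the geometric idea concrete, below is a minimal PyTorch sketch of one linearize-and-project iteration in the $l_2$ case, under stated assumptions: the function and parameter names (`fab_l2_step`, `alpha`, `eta`) are hypothetical, and the authors' full attack additionally handles the $l_1$/$l_\infty$ projections, box constraints via a dedicated projection, an adaptive mixing weight, random restarts, and a backward step toward the original point. This is an illustrative sketch, not the reference implementation.

```python
# Sketch of one FAB-style l2 step: linearize the classifier's margin,
# project onto the resulting hyperplane, and bias the iterate toward
# the original input. Hypothetical names; not the authors' code.
import torch

def fab_l2_step(model, x, x_orig, label, alpha=0.1, eta=1.05):
    """One linearize-and-project step toward the nearest decision boundary."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                                  # shape [1, K]
    # Strongest competing class j != label.
    masked = logits.clone().detach()
    masked[0, label] = -float("inf")
    j = masked.argmax(dim=1).item()
    # Linearize the margin g(z) = f_label(z) - f_j(z) around x.
    margin = logits[0, label] - logits[0, j]
    margin.backward()
    w, b = x.grad, margin.detach()                     # g(z) ~ b + <w, z - x>
    wn2 = (w * w).sum() + 1e-12
    # l2 projection of x onto the approximated boundary {g = 0}.
    d_cur = -(b / wn2) * w
    # Projection of the original input onto the same hyperplane.
    b_orig = b + (w * (x_orig - x.detach())).sum()
    d_orig = -(b_orig / wn2) * w
    # Biased convex combination keeps iterates close to x_orig; the small
    # overshoot eta > 1 helps the step actually cross the boundary.
    x_new = (1 - alpha) * (x.detach() + eta * d_cur) \
            + alpha * (x_orig + eta * d_orig)
    return x_new.clamp(0.0, 1.0)                       # stay in the image box
```

Because each iterate is pulled back toward the original input, the smallest boundary-crossing perturbation found across iterations directly yields the robust accuracy at every threshold from a single run, as the abstract notes.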
