ResearchTrend.AI

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples

30 April 2024
Antonio Emanuele Cinà
Jérôme Rony
Maura Pintor
Christian Scano
Ambra Demontis
Battista Biggio
Ismail Ben Ayed
Fabio Roli
Main: 6 pages · 13 figures · Bibliography: 3 pages · 17 tables · Appendix: 18 pages
Abstract

Adversarial examples are typically optimized with gradient-based attacks. While novel attacks are continuously proposed, each is shown to outperform its predecessors using different experimental setups, hyperparameter settings, and numbers of forward and backward calls to the target models. This yields overly optimistic and even biased evaluations that may unfairly favor one particular attack over the others. In this work, we aim to overcome these limitations by proposing AttackBench, i.e., the first evaluation framework that enables a fair comparison among different attacks. To this end, we first propose a categorization of gradient-based attacks, identifying their main components and differences. We then introduce our framework, which evaluates their effectiveness and efficiency. We measure these characteristics by (i) defining an optimality metric that quantifies how close an attack is to the optimal solution, and (ii) limiting the number of forward and backward queries to the model, such that all attacks are compared within a given maximum query budget. Our extensive experimental analysis compares more than 100 attack implementations with a total of over 800 different configurations against CIFAR-10 and ImageNet models, highlighting that only very few attacks outperform all the competing approaches. Within this analysis, we shed light on several implementation issues that prevent many attacks from finding better solutions or running at all. We release AttackBench as a publicly available benchmark, aiming to continuously update it to include and evaluate novel gradient-based attacks for optimizing adversarial examples.
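The abstract's point (ii), comparing all attacks under the same maximum query budget, can be illustrated with a minimal sketch. The class and method names below are hypothetical and not AttackBench's actual API; the sketch only shows the idea of wrapping a model so that every forward and backward query is counted and the attack is stopped once the shared budget is exhausted.

```python
class QueryBudgetExceeded(Exception):
    """Raised once an attack has spent its full query budget."""


class BudgetedModel:
    """Hypothetical wrapper: charge every forward/backward query
    against a fixed budget so all attacks are compared fairly."""

    def __init__(self, model, max_queries):
        self.model = model                # any callable returning predictions
        self.max_queries = max_queries    # shared budget for all attacks
        self.forward_calls = 0
        self.backward_calls = 0

    @property
    def queries_used(self):
        return self.forward_calls + self.backward_calls

    def _charge(self, counter):
        # Refuse the query if the budget is already exhausted.
        if self.queries_used >= self.max_queries:
            raise QueryBudgetExceeded(
                f"budget of {self.max_queries} queries exhausted"
            )
        setattr(self, counter, getattr(self, counter) + 1)

    def forward(self, x):
        self._charge("forward_calls")
        return self.model(x)

    def backward(self, x, grad_fn):
        # grad_fn stands in for a real gradient computation.
        self._charge("backward_calls")
        return grad_fn(x)
```

An attack that receives a `BudgetedModel` instead of the raw model can then be cut off at exactly the same query count as every competing attack, which is what makes efficiency comparisons meaningful.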

@article{cinà2025_2404.19460,
  title={AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples},
  author={Antonio Emanuele Cinà and Jérôme Rony and Maura Pintor and Luca Demetrio and Ambra Demontis and Battista Biggio and Ismail Ben Ayed and Fabio Roli},
  journal={arXiv preprint arXiv:2404.19460},
  year={2025}
}