Measuring the Transferability of ℓ∞ Attacks by the ℓ₂ Norm

20 February 2021
Sizhe Chen
Qinghua Tao
Zhixing Ye
Xiaolin Huang
Abstract

Deep neural networks can be fooled by adversarial examples that differ only trivially from the original samples. To keep the difference imperceptible to human eyes, researchers bound the adversarial perturbations by the ℓ∞ norm, which now commonly serves as the standard for aligning the strength of different attacks in a fair comparison. However, we propose that the ℓ∞ norm alone is not sufficient to measure attack strength, because even at a fixed ℓ∞ distance, the ℓ₂ distance also greatly affects attack transferability between models. This discovery yields a deeper understanding of the attack mechanism: several existing methods attack black-box models better partly because they craft perturbations with 70% to 130% larger ℓ₂ distances. Since larger perturbations naturally lead to better transferability, we advocate that attack strength be measured simultaneously by both the ℓ∞ and ℓ₂ norms. Our proposal is firmly supported by extensive experiments on the ImageNet dataset covering 7 attacks, 4 white-box models, and 9 black-box models.
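The proposed measurement reduces to reporting both norms of the same perturbation: verify that the ℓ∞ budget is respected, then record the ℓ₂ distance as the additional strength indicator. Below is a minimal sketch of that check, assuming PyTorch-style image batches; the helper name and the 8/255 budget are illustrative, not taken from the paper.

```python
import torch

def perturbation_norms(x_adv: torch.Tensor, x: torch.Tensor):
    """Per-sample L-inf and L2 norms of adversarial perturbations.

    x_adv, x: image batches of shape (N, C, H, W) on the same device.
    """
    delta = (x_adv - x).flatten(start_dim=1)  # (N, C*H*W)
    linf = delta.abs().max(dim=1).values      # should satisfy the attack budget
    l2 = delta.norm(p=2, dim=1)               # the extra quantity the paper reports
    return linf, l2

# Illustrative usage: two attacks with the same L-inf budget (e.g. 8/255)
# can still differ greatly in L2, which the paper links to transferability.
# linf, l2 = perturbation_norms(x_adv, x)
# assert torch.all(linf <= 8 / 255 + 1e-6)
```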
