
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

7 July 2020
Utku Ozbulak
Jonathan Peck
W. D. Neve
Bart Goossens
Yvan Saeys
Arnout Van Messem
Abstract

Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks. In this study, we show that effective regional perturbations can be generated without resorting to complex methods. We develop a very simple regional adversarial perturbation attack based on the sign of the cross-entropy gradient, cross-entropy being one of the most commonly used losses in adversarial machine learning. Our experiments on ImageNet with multiple models reveal that, on average, $76\%$ of the generated adversarial examples maintain model-to-model transferability when the perturbation is applied to local image regions. Depending on the selected region, these localized adversarial examples require significantly less $L_p$ norm distortion (for $p \in \{0, 2, \infty\}$) compared to their non-local counterparts. These localized attacks therefore have the potential to undermine defenses that claim robustness under the aforementioned norms.
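As a rough illustration of the idea described in the abstract, the sketch below applies a one-step sign-gradient (FGSM-style) perturbation restricted to a binary region mask. This is a minimal sketch assuming a PyTorch image classifier; the function name `regional_fgsm`, the mask construction, and the epsilon budget are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a region-constrained sign-gradient attack (FGSM-style).
# Assumes a PyTorch classifier; illustrative only, not the paper's exact method.
import torch
import torch.nn.functional as F

def regional_fgsm(model, x, y, mask, epsilon=8 / 255):
    """Perturb only the pixels where `mask` is 1.

    x:    input batch, shape (N, C, H, W), values in [0, 1]
    y:    true labels, shape (N,)
    mask: binary region mask, broadcastable to x (e.g. shape (1, 1, H, W))
    """
    x = x.clone().detach().requires_grad_(True)
    # Cross-entropy: the commonly used loss referenced in the abstract.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Sign of the gradient, zeroed outside the selected region.
    perturbation = epsilon * x.grad.sign() * mask
    return (x + perturbation).clamp(0, 1).detach()

# Hypothetical usage: perturb only a central 100x100 patch of 224x224 inputs.
# mask = torch.zeros(1, 1, 224, 224)
# mask[..., 62:162, 62:162] = 1.0
# x_adv = regional_fgsm(model, images, labels, mask)
```

Note that restricting the perturbation to the mask directly caps the $L_0$ norm at the region's pixel count, which gives one intuition for why localized examples can achieve lower $L_p$ distortion than their non-local counterparts.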

View on arXiv: https://arxiv.org/abs/2007.03198