How many dimensions are required to find an adversarial example?

24 March 2023
Charles Godfrey
Henry Kvinge
Elise Bishoff
Myles Mckay
Davis Brown
T. Doster
E. Byler
    AAML
Abstract

Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of the model input. On the other hand, a range of recent works consider the case where an adversary can perturb either (i) a limited number of input parameters or (ii) a subset of modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace $V$ in the ambient input space $\mathcal{X}$. Motivated by this, in this work we investigate how adversarial vulnerability depends on $\dim(V)$. In particular, we show that the adversarial success of standard PGD attacks with $\ell^p$ norm constraints behaves like a monotonically increasing function of $\epsilon \left( \frac{\dim(V)}{\dim \mathcal{X}} \right)^{\frac{1}{q}}$, where $\epsilon$ is the perturbation budget and $\frac{1}{p} + \frac{1}{q} = 1$, provided $p > 1$ (the case $p = 1$ presents additional subtleties which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high-dimensional spaces.
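To make the attack setting concrete, below is a minimal sketch (not the authors' code) of an $\ell^\infty$-constrained PGD attack whose perturbation is confined to a coordinate subspace $V$. The boolean `mask` selecting the perturbable dimensions, along with the names `model`, `eps`, `alpha`, and `steps`, are illustrative assumptions; a PyTorch-style API is assumed.

```python
# Minimal sketch of subspace-constrained PGD (assumed PyTorch API; not the paper's implementation).
import torch
import torch.nn.functional as F

def subspace_pgd(model, x, y, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """Run an ell-infinity PGD attack whose perturbation is restricted to the
    coordinate subspace V selected by the boolean `mask` over input dimensions."""
    mask = mask.to(x.dtype)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Ascend the loss, but only along coordinates lying in V.
            delta += alpha * grad.sign() * mask
            # Project back onto the ell-infinity ball of radius eps, staying in V.
            delta.clamp_(-eps, eps)
            delta *= mask
            # Keep the perturbed input inside the valid data range [0, 1].
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()
```

In this sketch, choosing a mask with a fraction $d/D$ of its entries set to True plays the role of $\frac{\dim(V)}{\dim \mathcal{X}}$ in the scaling form above.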

View on arXiv: https://arxiv.org/abs/2303.14173