Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness

8 February 2020
Aounon Kumar, Alexander Levine, Tom Goldstein, S. Feizi
arXiv:2002.03239
Abstract

Randomized smoothing, using just a simple isotropic Gaussian distribution, has been shown to produce good robustness guarantees against $\ell_2$-norm bounded adversaries. In this work, we show that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime. In particular, for a vast class of i.i.d. smoothing distributions, we prove that the largest $\ell_p$-radius that can be certified decreases as $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ with dimension $d$ for $p > 2$. Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius. When restricted to {\it generalized} Gaussian smoothing, these two bounds can be shown to be within a constant factor of each other in an asymptotic sense, establishing that Gaussian smoothing provides the best possible results, up to a constant factor, when $p \geq 2$. We present experimental results on CIFAR to validate our theory. For other smoothing distributions, such as a uniform distribution within an $\ell_1$- or an $\ell_\infty$-norm ball, we show upper bounds of the form $O(1/d)$ and $O(1/d^{1 - \frac{1}{p}})$ respectively, which have an even worse dependence on $d$.
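For intuition, here is a minimal Python sketch (not the paper's code; sigma and p_A are illustrative inputs) that converts the standard Gaussian-smoothing $\ell_2$ certificate of Cohen et al. (2019) into an $\ell_p$ certificate by norm equivalence. This naive conversion already exhibits the $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ scaling for $p \geq 2$:

```python
# A minimal sketch (not the paper's code): how an l2 certificate from
# Gaussian randomized smoothing translates into an l_p certificate,
# showing the O(1/d^(1/2 - 1/p)) decay discussed in the abstract.
# Assumes the standard Cohen et al. (2019) bound: with noise level
# sigma and a lower bound p_A > 1/2 on the top-class probability, the
# smoothed classifier is robust in an l2 ball of radius sigma * Phi^{-1}(p_A).

from scipy.stats import norm


def certified_l2_radius(sigma: float, p_A: float) -> float:
    """Certified l2 radius under isotropic Gaussian smoothing."""
    return sigma * norm.ppf(p_A)


def certified_lp_radius(sigma: float, p_A: float, d: int, p: float) -> float:
    """l_p radius implied by the l2 certificate via norm equivalence.

    For p >= 2 in R^d, ||x||_2 <= d^(1/2 - 1/p) * ||x||_p, so any
    perturbation with ||delta||_p <= R / d^(1/2 - 1/p) also satisfies
    ||delta||_2 <= R and is covered by the l2 certificate.
    """
    assert p >= 2
    return certified_l2_radius(sigma, p_A) / d ** (0.5 - 1.0 / p)


if __name__ == "__main__":
    d = 3 * 32 * 32  # CIFAR-10 input dimension
    r2 = certified_lp_radius(0.5, 0.9, d, p=2)              # equals the l2 radius
    rinf = certified_lp_radius(0.5, 0.9, d, p=float("inf"))  # smaller by sqrt(d), ~55x here
    print(f"l2 radius: {r2:.4f}   l_inf radius: {rinf:.6f}")
```

The paper's contribution is to show that this $\sqrt{d}$-type loss is not an artifact of the conversion: for $p > 2$, no i.i.d. smoothing distribution in the class it considers certifies an $\ell_p$-radius with a better dependence on $d$.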
