arXiv:2002.08118

Randomized Smoothing of All Shapes and Sizes

19 February 2020
Greg Yang
Tony Duan
J. E. Hu
Hadi Salman
Ilya P. Razenshteyn
Jungshian Li
    AAML
Abstract

Randomized smoothing is the current state-of-the-art defense with provable robustness against $\ell_2$ adversarial attacks. Many works have devised new randomized smoothing schemes for other metrics, such as $\ell_1$ or $\ell_\infty$; however, substantial effort was needed to derive such new guarantees. This begs the question: can we find a general theory for randomized smoothing? We propose a novel framework for devising and analyzing randomized smoothing schemes, and validate its effectiveness in practice. Our theoretical contributions are: (1) we show that for an appropriate notion of "optimal", the optimal smoothing distributions for any "nice" norms have level sets given by the norm's *Wulff Crystal*; (2) we propose two novel and complementary methods for deriving provably robust radii for any smoothing distribution; and, (3) we show fundamental limits to current randomized smoothing techniques via the theory of *Banach space cotypes*. By combining (1) and (2), we significantly improve the state-of-the-art certified accuracy in $\ell_1$ on standard datasets. Meanwhile, we show using (3) that with only label statistics under random input perturbations, randomized smoothing cannot achieve nontrivial certified accuracy against perturbations of $\ell_p$-norm $\Omega(\min(1, d^{\frac{1}{p} - \frac{1}{2}}))$, when the input dimension $d$ is large. We provide code in github.com/tonyduan/rs4a.
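
For the paper's own certification code, see the linked rs4a repository. As background for the abstract, the sketch below illustrates the standard Gaussian ($\ell_2$) randomized-smoothing certificate of Cohen et al. (2019), which this work generalizes; it is a minimal illustration, not the paper's framework. The base classifier `f`, noise scale `sigma`, sample budget `n`, and confidence level `alpha` are assumed placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify_l2(f, x, sigma=0.25, n=1000, alpha=0.001):
    """Certified l2 radius at x for the Gaussian-smoothed version of classifier f.

    f     : base classifier; maps a batch of inputs to integer class labels.
    sigma : std. dev. of the isotropic Gaussian smoothing noise (assumed value).
    n     : number of Monte Carlo noise samples (assumed value).
    alpha : allowed failure probability of the statistical certificate.
    Returns (predicted_class, certified_radius); radius 0.0 means abstain.
    """
    # Vote over n noisy copies of x.
    noise = sigma * np.random.randn(n, *x.shape)
    votes = f(x[None, ...] + noise)
    top = int(np.bincount(votes).argmax())
    k = int((votes == top).sum())

    # Clopper-Pearson lower confidence bound on P[f(x + noise) = top].
    p_lo = binomtest(k, n, alternative="greater").proportion_ci(
        confidence_level=1 - alpha
    ).low

    if p_lo <= 0.5:
        return top, 0.0  # abstain: no nontrivial certificate
    # Gaussian-smoothing certificate: radius = sigma * Phi^{-1}(p_lower).
    return top, sigma * norm.ppf(p_lo)
```

In the paper's terms, contributions (1) and (2) replace the Gaussian here with a smoothing distribution whose level sets follow the norm's Wulff Crystal and supply the corresponding robust radius for that distribution.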
