GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

9 June 2022
Zhongkai Hao
Chengyang Ying
Yinpeng Dong
Hang Su
Jun Zhu
Jian Song
    AAML
Abstract

Certified defenses such as randomized smoothing have shown promise towards building reliable machine learning systems against $\ell_p$-norm bounded attacks. However, existing methods are insufficient or unable to provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelation), which are more common in practice and often unrestricted. To fill this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by alternative baselines.
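To give a rough sense of the general approach, the sketch below shows plain randomized smoothing applied in the semantic parameter space: Gaussian noise is added to a transformation parameter, a surrogate image-to-image network approximates the transformation, and the smoothed prediction is estimated by Monte-Carlo voting. This is only an illustrative sketch under assumed interfaces (the `classifier`, `surrogate`, `sigma`, and `n_samples` names are hypothetical), not the authors' GSmooth algorithm or its dimension-augmentation certificate.

```python
# Illustrative sketch only: randomized smoothing over a scalar semantic
# transformation parameter, with a surrogate image-to-image network standing
# in for the true (possibly non-closed-form) transformation.
import torch


def smoothed_predict(classifier, surrogate, x, sigma=0.5, n_samples=1000):
    """Monte-Carlo estimate of the smoothed classifier's prediction.

    classifier: callable mapping an image batch (N, C, H, W) to class logits.
    surrogate:  callable approximating the semantic transformation,
                conditioned on a scalar parameter (assumed interface).
    x:          a single image tensor of shape (C, H, W).
    sigma:      standard deviation of the Gaussian noise on the parameter.
    """
    counts = None
    with torch.no_grad():
        for _ in range(n_samples):
            # Sample a transformation parameter from a Gaussian around 0.
            alpha = torch.randn(1) * sigma
            # Apply the (approximate) semantic transformation to the image.
            x_t = surrogate(x.unsqueeze(0), alpha)
            logits = classifier(x_t)
            if counts is None:
                counts = torch.zeros(logits.shape[1], dtype=torch.long)
            # Vote for the predicted class of this noisy sample.
            counts[logits.argmax(dim=1).item()] += 1
    # The smoothed prediction is the majority class over noisy samples;
    # a certified radius would additionally be derived from the vote margin.
    return counts.argmax().item(), counts
```

In classical randomized smoothing the vote counts are further used to lower-bound the top-class probability and derive a certified radius; GSmooth extends this style of guarantee to general semantic transformations, which the sketch above does not attempt to reproduce.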
