SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

17 November 2021
Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin
Abstract

Randomized smoothing is currently a state-of-the-art method for constructing a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence: higher confidence from a smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness in the case of smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.
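As a rough illustration of the self-mixup idea described in the abstract, the sketch below shows a PyTorch-style training loss that (i) estimates the smoothed classifier by averaging softmax outputs over Gaussian-noised copies of the input, (ii) finds an adversarial counterpart of each input by gradient ascent on that smoothed prediction, and (iii) trains on convex combinations of the two. This is not the authors' released code: the function names, the Monte Carlo smoothing with n_noises samples, the PGD-style attack, and the uniform-label assignment at the adversarial endpoint are all illustrative assumptions; the paper's exact attack and label schedule may differ.

# Illustrative sketch only (not the authors' reference implementation).
# Assumes 4D image batches of shape (B, C, H, W); all hyperparameters
# (sigma, n_noises, steps, step_size) are placeholder choices.
import torch
import torch.nn.functional as F

def smoothed_probs(model, x, sigma=0.25, n_noises=4):
    # Monte Carlo estimate of the smoothed classifier: average softmax
    # outputs over Gaussian-noised copies of the input.
    p = 0.0
    for _ in range(n_noises):
        p = p + F.softmax(model(x + sigma * torch.randn_like(x)), dim=1)
    return p / n_noises

def adversarial_endpoint(model, x, y, sigma=0.25, steps=4, step_size=0.5):
    # PGD-style ascent on the smoothed classifier's loss, following the
    # direction of adversarial perturbation for each input (a sketch).
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.nll_loss(torch.log(smoothed_probs(model, x_adv, sigma) + 1e-12), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        norm = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
        x_adv = (x_adv + step_size * grad / norm).detach().requires_grad_(True)
    return x_adv.detach()

def smoothmix_like_loss(model, x, y, num_classes, sigma=0.25):
    # Self-mixup: train on convex combinations of each input and its
    # adversarial counterpart. The adversarial endpoint is pulled toward
    # the uniform label, penalizing over-confident near off-class samples.
    x_adv = adversarial_endpoint(model, x, y, sigma)
    lam = torch.rand(x.size(0), device=x.device).view(-1, 1, 1, 1)
    x_mix = (1 - lam) * x + lam * x_adv
    onehot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(onehot, 1.0 / num_classes)
    target = (1 - lam.view(-1, 1)) * onehot + lam.view(-1, 1) * uniform
    log_p = torch.log(smoothed_probs(model, x_mix, sigma) + 1e-12)
    return -(target * log_p).sum(dim=1).mean()

Interpolating the target toward the uniform distribution along the mixup path makes confidence decay smoothly in the adversarial direction, which is one way to realize the abstract's goal of adaptively placing a new decision boundary between over-confident off-class samples.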
