arXiv:1912.09899

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

20 December 2019
Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
    AAML
Abstract

It is well known that classifiers are vulnerable to adversarial perturbations. To defend against such perturbations, various certified robustness results have been derived. However, existing certified robustness guarantees are limited to top-1 predictions. In many real-world applications, top-$k$ predictions are more relevant. In this work, we aim to derive certified robustness for top-$k$ predictions. In particular, our certified robustness is based on randomized smoothing, which turns any base classifier into a new, smoothed classifier by adding random noise to the input example. We adopt randomized smoothing because it is scalable to large neural networks and applicable to any classifier. We derive a tight robustness bound in the $\ell_2$ norm for top-$k$ predictions when randomized smoothing uses Gaussian noise. We find that generalizing certified robustness from top-1 to top-$k$ predictions poses significant technical challenges. We also empirically evaluate our method on CIFAR10 and ImageNet. For example, our method obtains an ImageNet classifier with a certified top-5 accuracy of 62.8% when the $\ell_2$ norms of the adversarial perturbations are less than 0.5 (=127/255). Our code is publicly available at: https://github.com/jjy1994/Certify_Topk.
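
The mechanism named in the abstract, randomized smoothing, defines the smoothed classifier's top-$k$ set as the $k$ labels that the base classifier predicts most often when Gaussian noise is added to the input. For context, prior top-1 analyses (e.g., Cohen et al., 2019) certify a radius of the form R = (σ/2)(Φ⁻¹(p_A) − Φ⁻¹(p_B)); this paper derives a tight analogue for top-$k$. The sketch below illustrates only the Monte Carlo prediction step, not the paper's certification procedure, and `base_classifier`, `sigma`, and `n_samples` are placeholder names rather than identifiers from the authors' repository.

```python
import numpy as np

def smoothed_topk(base_classifier, x, sigma=0.5, n_samples=1000, num_classes=1000, k=5):
    """Monte Carlo estimate of the smoothed classifier's top-k prediction.

    `base_classifier` is any function mapping a single input array to a
    predicted class label (a hypothetical stand-in for a pretrained model).
    """
    counts = np.zeros(num_classes, dtype=np.int64)
    for _ in range(n_samples):
        # Add isotropic Gaussian noise N(0, sigma^2 I) to the input.
        noisy = x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    # Smoothed top-k set: the k labels predicted most often under noise.
    return np.argsort(counts)[::-1][:k]
```

In the paper's setting, these empirical counts would additionally be converted into confidence bounds on the label probabilities to certify an $\ell_2$ radius for the top-$k$ set; that certification step is omitted from this sketch.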
