Constructing a provably adversarially-robust classifier from a high accuracy one

16 December 2019
Grzegorz Gluch, R. Urbanke
arXiv:1912.07561
Abstract

Modern machine learning models with very high accuracy have been shown to be vulnerable to small, adversarially chosen perturbations of the input. Given black-box access to a high-accuracy classifier $f$, we show how to construct a new classifier $g$ that has high accuracy and is also robust to adversarial $\ell_2$-bounded perturbations. Our algorithm builds upon the framework of \textit{randomized smoothing}, which has recently been shown to outperform all previous defenses against $\ell_2$-bounded adversaries. Using techniques such as random partitions and doubling dimension, we bound the adversarial error of $g$ in terms of the optimum error. In this paper we focus on our conceptual contribution, but we present two examples to illustrate our framework. We argue that, under some assumptions, our bounds are optimal for these cases.
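
As context for the abstract's mention of randomized smoothing, the sketch below illustrates the general idea behind a smoothed classifier (in the spirit of Cohen et al., 2019), not the specific construction of this paper: given only black-box access to a base classifier $f$, the smoothed classifier $g$ returns the class that $f$ outputs most often under Gaussian perturbations of the input. All names and parameters here are illustrative, not taken from the paper.

import numpy as np

def smoothed_predict(f, x, sigma=0.25, n_samples=1000, rng=None):
    """Generic randomized-smoothing prediction (illustrative sketch only).

    f         : black-box base classifier; maps a batch of inputs to class labels
    x         : a single input (1-D numpy array)
    sigma     : standard deviation of the isotropic Gaussian noise
    n_samples : number of noisy copies used for the majority vote
    """
    rng = rng or np.random.default_rng(0)
    # Draw n_samples Gaussian perturbations of x and classify each one with f.
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    labels = f(x[None, :] + noise)
    # g(x) is the class that f predicts most often under the noise.
    counts = np.bincount(labels)
    return int(np.argmax(counts))

# Toy base classifier: a linear threshold on the first coordinate.
f = lambda batch: (batch[:, 0] > 0).astype(int)
print(smoothed_predict(f, np.array([0.3, -1.2])))  # -> 1

Larger sigma makes $g$ certifiably robust to larger $\ell_2$ perturbations but can lower its clean accuracy; the paper's contribution concerns bounding the resulting adversarial error in terms of the optimum error, which this toy sketch does not attempt.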
