Reducing Adversarially Robust Learning to Non-Robust PAC Learning

22 October 2020
Omar Montasser, Steve Hanneke, Nathan Srebro
Abstract

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner. We give a reduction that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust learner $\mathcal{A}$ for $\mathcal{C}$. The number of calls to $\mathcal{A}$ depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
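To make the black-box setup concrete, below is a minimal illustrative sketch of one natural way to wrap a non-robust learner: inflate the training set with every allowed perturbation of each example, repeatedly call the learner on reweighted samples, and output a majority vote. This is a generic boosting-style construction for intuition only, not the paper's actual reduction or its oracle-call analysis; the names `non_robust_learner`, `perturbations`, and `num_rounds` are assumptions introduced here.

```python
"""Illustrative sketch (not the paper's algorithm): wrapping a black-box
non-robust PAC learner A to produce a robustly-correct majority vote."""

import math
import random


def robust_reduction(data, perturbations, non_robust_learner, num_rounds=10):
    # data: list of (x, y) pairs with labels y in {-1, +1}
    # perturbations(x): finite list of allowed adversarial perturbations of x
    # non_robust_learner(sample): black-box learner; returns h with h(z) in {-1, +1}

    # Inflate the sample: a robust predictor must label every perturbation of x as y.
    inflated = [(z, y) for (x, y) in data for z in perturbations(x)]
    m = len(inflated)
    weights = [1.0 / m] * m
    hypotheses, alphas = [], []

    for _ in range(num_rounds):
        # Call the black-box learner on a sample drawn from the current weights.
        sample = random.choices(inflated, weights=weights, k=m)
        h = non_robust_learner(sample)

        # Weighted error of h on the inflated set (clamped away from 0 and 1).
        err = sum(w for w, (z, y) in zip(weights, inflated) if h(z) != y)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        hypotheses.append(h)
        alphas.append(alpha)

        # Up-weight perturbed points the learner still misclassifies (AdaBoost-style).
        weights = [w * math.exp(-alpha * y * h(z)) for w, (z, y) in zip(weights, inflated)]
        total = sum(weights)
        weights = [w / total for w in weights]

    def predictor(z):
        # Weighted majority vote over the collected black-box hypotheses.
        score = sum(a * h(z) for a, h in zip(alphas, hypotheses))
        return 1 if score >= 0 else -1

    return predictor
```

In this toy version the number of oracle calls equals `num_rounds`; the paper's contribution is a construction whose number of calls scales only logarithmically in the number of allowed perturbations per example, together with a matching lower bound.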
