
arXiv:2210.00635
Robust Empirical Risk Minimization with Tolerance

2 October 2022
Robi Bhattacharjee, Max Hopkins, Akash Kumar, Hantao Yu, Kamalika Chaudhuri
Abstract

Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, and current theoretical techniques requiring exponential sample complexity and complicated improper learning rules fall far from answering the need. In this work we study the fundamental paradigm of (robust) \textit{empirical risk minimization} (RERM), a simple process in which the learner outputs any hypothesis minimizing its training error. RERM famously fails to robustly learn VC classes (Montasser et al., 2019a), a bound we show extends even to `nice' settings such as (bounded) halfspaces. As such, we study a recent relaxation of the robust model called \textit{tolerant} robust learning (Ashtiani et al., 2022), where the output classifier is compared to the best achievable error over slightly larger perturbation sets. We show that under geometric niceness conditions, a natural tolerant variant of RERM is indeed sufficient for $\gamma$-tolerant robust learning of VC classes over $\mathbb{R}^d$, and requires only $\tilde{O}\left(\frac{VC(H)\, d \log \frac{D}{\gamma\delta}}{\epsilon^2}\right)$ samples for robustness regions of (maximum) diameter $D$.
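To make the RERM paradigm described above concrete, the following is a minimal Python sketch. It is illustrative only: the hypothesis class is a small finite set of callables, and the robust loss over each perturbation ball is approximated by random sampling rather than by the exact geometric arguments in the paper. The names (tolerant_rerm, sample_ball), the finite hypothesis class, and the sampling-based approximation are assumptions of this sketch, not the authors' algorithm.

import numpy as np

def sample_ball(rng, d, radius, n):
    """Draw n points uniformly from the Euclidean ball of given radius in R^d."""
    v = rng.normal(size=(n, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random directions
    r = radius * rng.random(n) ** (1.0 / d)         # radii for uniform volume
    return v * r[:, None]

def tolerant_rerm(hypotheses, X, y, radius, n_pert=100, seed=0):
    """Return the hypothesis minimizing an approximate empirical robust loss.

    Illustrative sketch, not the paper's algorithm: a training point counts
    as an error if any sampled perturbation within `radius` is misclassified.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    best_h, best_errors = None, np.inf
    for h in hypotheses:
        errors = 0
        for xi, yi in zip(X, y):
            pts = xi + sample_ball(rng, d, radius, n_pert)
            if np.any(h(pts) != yi):   # robustly correct only if every
                errors += 1            # sampled perturbation agrees with yi
        if errors < best_errors:
            best_h, best_errors = h, errors
    return best_h, best_errors / n

# Example: a handful of candidate halfspaces through the origin in R^2.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])
halfspaces = [lambda P, w=w: np.sign(P @ w) for w in rng.normal(size=(5, 2))]
h_star, robust_loss = tolerant_rerm(halfspaces, X, y, radius=0.1)
print(f"empirical robust loss of selected halfspace: {robust_loss:.2f}")

In the tolerant model, the relaxation enters in the guarantee rather than the procedure: per the abstract, the selected hypothesis's robust error at the given radius is compared against the best achievable error over slightly larger perturbation sets (parameterized by the tolerance $\gamma$), rather than over the same sets.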
