
Gradient Methods Provably Converge to Non-Robust Networks

9 February 2022
Gal Vardi, Gilad Yehudai, Ohad Shamir
arXiv:2202.04347
Abstract

Despite a great deal of research, it is still unclear why neural networks are so susceptible to adversarial examples. In this work, we identify natural settings where depth-2 ReLU networks trained with gradient flow are provably non-robust (susceptible to small adversarial $\ell_2$-perturbations), even when robust networks that classify the training dataset correctly exist. Perhaps surprisingly, we show that the well-known implicit bias towards margin maximization induces bias towards non-robust networks, by proving that every network which satisfies the KKT conditions of the max-margin problem is non-robust.
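As background for the abstract's final claim, here is a minimal sketch (not taken from the paper's text) of the optimization problem involved. For homogeneous models such as depth-2 ReLU networks, gradient flow on exponentially-tailed classification losses is known to converge in direction to a KKT point of the $\ell_2$ max-margin problem (Lyu and Li, 2020). The network $N_\theta$, dataset $\{(x_i, y_i)\}_{i=1}^n$, and multipliers $\lambda_i$ below are illustrative notation; since $N_\theta$ is non-smooth, stationarity is stated with Clarke subgradients.

```latex
% Max-margin problem (illustrative notation; N_theta is the network,
% {(x_i, y_i)} the training set with labels y_i in {-1, +1}):
\min_{\theta}\ \frac{1}{2}\,\|\theta\|^2
\quad \text{s.t.} \quad y_i\, N_{\theta}(x_i) \ge 1 \quad \forall i \in [n].

% KKT conditions at a candidate theta, with multipliers lambda_i >= 0;
% partial_theta denotes the Clarke subdifferential of the ReLU network:
\theta \in \sum_{i=1}^{n} \lambda_i\, y_i\, \partial_{\theta} N_{\theta}(x_i),
\qquad
\lambda_i \bigl( y_i\, N_{\theta}(x_i) - 1 \bigr) = 0 \quad \forall i \in [n].
```

Under this reading, the abstract's result is that, in the settings the paper identifies, every $\theta$ satisfying these conditions is susceptible to small $\ell_2$-perturbations, even though robust networks fitting the same training data exist.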
