Adversarial Examples in Random Neural Networks with General Activations

Andrea Montanari, Yuchen Wu
31 March 2022 · arXiv:2203.17209
Topics: GAN, AAML
Abstract

A substantial body of empirical work documents the lack of robustness of deep learning models to adversarial examples. Recent theoretical work proved that adversarial examples are ubiquitous in two-layer networks with sub-exponential width and ReLU or smooth activations, and in multi-layer ReLU networks with sub-exponential width. We present a result of the same type, with no restriction on width and for general locally Lipschitz continuous activations. More precisely, given a neural network $f(\,\cdot\,;\boldsymbol{\theta})$ with random weights $\boldsymbol{\theta}$ and a feature vector $\boldsymbol{x}$, we show that an adversarial example $\boldsymbol{x}'$ can be found with high probability along the direction of the gradient $\nabla_{\boldsymbol{x}} f(\boldsymbol{x};\boldsymbol{\theta})$. Our proof is based on a Gaussian conditioning technique. Instead of proving that $f$ is approximately linear in a neighborhood of $\boldsymbol{x}$, we characterize the joint distribution of $f(\boldsymbol{x};\boldsymbol{\theta})$ and $f(\boldsymbol{x}';\boldsymbol{\theta})$ for $\boldsymbol{x}' = \boldsymbol{x} - s(\boldsymbol{x})\,\nabla_{\boldsymbol{x}} f(\boldsymbol{x};\boldsymbol{\theta})$.
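
The mechanism the theorem describes is easy to simulate. Below is a minimal NumPy sketch, not the paper's construction: the dimension d, the width m, the tanh activation (one example of a locally Lipschitz activation), and the step-size heuristic s are all illustrative assumptions. It shows that a perturbation along $\nabla_{\boldsymbol{x}} f(\boldsymbol{x};\boldsymbol{\theta})$, small relative to $\|\boldsymbol{x}\|$, typically flips the sign of a random two-layer network's output.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (not from the paper): input dimension d, hidden width m.
    d, m = 512, 256
    W = rng.normal(size=(m, d)) / np.sqrt(d)   # random first-layer weights, rows ~ unit norm
    a = rng.choice([-1.0, 1.0], size=m)        # random second-layer signs

    def f(x):
        # Two-layer random network f(x; theta) = a^T tanh(W x) / sqrt(m);
        # tanh stands in for a general locally Lipschitz activation.
        return a @ np.tanh(W @ x) / np.sqrt(m)

    def grad_f(x):
        # Analytic input gradient: (1/sqrt(m)) sum_j a_j sigma'(<w_j, x>) w_j,
        # with sigma'(t) = sech(t)^2 for sigma = tanh.
        return ((a / np.cosh(W @ x) ** 2) @ W) / np.sqrt(m)

    x = rng.normal(size=d)                     # a typical feature vector

    g = grad_f(x)
    # Hypothetical step-size heuristic: if f were exactly linear along g,
    # this step would map f(x) to -f(x), flipping the sign of the output.
    s = 2.0 * f(x) / (np.linalg.norm(g) ** 2 + 1e-12)
    x_adv = x - s * g                          # x' = x - s(x) grad_x f(x; theta)

    print(f"f(x)  = {f(x):+.4f}")
    print(f"f(x') = {f(x_adv):+.4f}")
    print(f"||x' - x|| / ||x|| = {np.linalg.norm(x_adv - x) / np.linalg.norm(x):.3f}")

On typical seeds the output sign flips while the input moves only a few percent in relative norm, which is the qualitative content of the theorem. The linearization heuristic above is only a sanity check: the paper's step size $s(\boldsymbol{x})$ and its high-probability guarantees come from the Gaussian conditioning argument, which does not rely on $f$ being approximately linear near $\boldsymbol{x}$.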

View on arXiv: https://arxiv.org/abs/2203.17209