Advocating for Multiple Defense Strategies against Adversarial Examples

4 December 2020
Alexandre Araujo
Laurent Meunier
Rafael Pinot
Benjamin Négrevergne
Abstract

It has been empirically observed that defense mechanisms designed to protect neural networks against ℓ∞ adversarial examples offer poor performance against ℓ2 adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights to illustrate the effect of this phenomenon in practice. Finally, we review some of the existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
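
To make the two threat models concrete, below is a minimal PGD sketch in PyTorch (illustrative only, not the authors' code): the ℓ∞ and ℓ2 attacks differ only in how the gradient step is taken and how the perturbation is projected back into the ε-ball. The model, the radius eps, the step size alpha, and the assumption that inputs are image batches in [0, 1] are all placeholders, not values from the paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, norm="linf", eps=0.03, alpha=0.01, steps=10):
    """Craft adversarial examples with PGD inside an eps-ball of the given norm.

    Assumes x is a batch of images in [0, 1] with shape (N, C, H, W).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                # l_inf: step along the sign of the gradient, then clip each
                # coordinate of the perturbation to [-eps, eps].
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)
            elif norm == "l2":
                # l2: step along the normalized gradient, then rescale the
                # perturbation back onto the eps-ball if it left it.
                g = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta = x_adv + alpha * g - x
                factor = (eps / delta.flatten(1).norm(dim=1).clamp_min(1e-12)).clamp(max=1.0)
                delta = delta * factor.view(-1, 1, 1, 1)
                x_adv = x + delta
            else:
                raise ValueError(f"unknown norm: {norm}")
            x_adv = x_adv.clamp(0.0, 1.0)  # keep a valid pixel range
        x_adv = x_adv.detach()
    return x_adv

Evaluating a model hardened against one norm with the other norm's attack is exactly the mismatch the paper analyzes; mixed strategies of the kind it reviews try to cover both balls at once, for instance by varying the attack norm during adversarial training.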
