Detecting Adversarial Patches with Class Conditional Reconstruction Networks

Perry Deng, Mohammad Saidur Rahman, M. Wright
arXiv:2011.05850 · 11 November 2020
Abstract

Defending against physical adversarial attacks is a rapidly growing topic in deep learning and computer vision. Prominent forms of physical adversarial attack, such as overlaid adversarial patches and objects, share similarities with digital attacks but are easy for humans to notice. This leads us to explore the hypothesis that adversarial detection methods, which have been shown to be ineffective against adaptive digital adversarial examples, can nevertheless be effective against these physical attacks. We use one such detection method, based on autoencoder architectures, and perform adversarial patching experiments on MNIST, SVHN, and CIFAR10 against a CNN architecture and two CapsNet architectures. We also propose two modifications to the EM-Routed CapsNet architecture, Affine Voting and Matrix Capsule Dropout, which improve its classification performance. Our investigation shows that the detector retains some of its effectiveness even against adaptive adversarial patch attacks. In addition, detection performance tends to decrease across all architectures as dataset complexity increases.
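As a concrete picture of the threat model, below is a minimal PyTorch sketch of an untargeted overlaid adversarial patch attack of the kind the abstract describes. The function names, optimizer settings, and fixed patch placement are illustrative assumptions, not the authors' exact procedure.

import torch
import torch.nn.functional as F

def apply_patch(images, patch, x0=0, y0=0):
    """Overlay a (C, h, w) patch onto a (B, C, H, W) batch at (x0, y0)."""
    patched = images.clone()
    _, h, w = patch.shape
    patched[:, :, y0:y0 + h, x0:x0 + w] = patch
    return patched

def train_patch(model, loader, channels=3, patch_size=8, epochs=5, lr=0.05):
    """Optimize a universal patch that raises the classifier's loss."""
    patch = torch.rand(channels, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for p in model.parameters():           # freeze the victim model
        p.requires_grad_(False)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            logits = model(apply_patch(images, patch))
            # Gradient ascent on cross-entropy: minimize its negation so
            # predictions are pushed away from the true labels.
            (-F.cross_entropy(logits, labels)).backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)         # keep pixels in a valid range
    return patch.detach()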
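And a sketch of the detection side: a class-conditional reconstruction network reconstructs the input conditioned on the classifier's predicted class, and inputs with unusually high reconstruction error are flagged. The decoder signature and the threshold calibration here are assumptions for illustration, not the paper's exact architecture or values.

import torch
import torch.nn.functional as F

@torch.no_grad()
def detect_adversarial(classifier, decoder, images, num_classes, threshold):
    """Flag inputs whose class-conditional reconstruction error is high."""
    preds = classifier(images).argmax(dim=1)
    cond = F.one_hot(preds, num_classes).float()
    recon = decoder(images, cond)          # assumed signature: (input, one-hot)
    err = (recon - images).pow(2).flatten(start_dim=1).mean(dim=1)
    return err > threshold                 # True => flagged as adversarial

# The threshold would typically be calibrated on clean validation data,
# e.g. a high percentile of per-example reconstruction error:
#   threshold = torch.quantile(clean_errors, 0.95)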
