PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang, Saeed Mahloujifar, Prateek Mittal
20 August 2021 · arXiv:2108.09135 (v2 latest) · Topics: VLM, AAML
Papers citing "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier"

10 of 60 citing papers shown (title · authors · topics · citations · date):
MagNet: a Two-Pronged Defense against Adversarial Examples
  Dongyu Meng, Hao Chen · AAML · 1,208 citations · 25 May 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
  Nicholas Carlini, D. Wagner · AAML · 1,864 citations · 20 May 2017
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
  Weilin Xu, David Evans, Yanjun Qi · AAML · 1,269 citations · 04 Apr 2017
On Detecting Adversarial Perturbations
  J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff · AAML · 950 citations · 14 Feb 2017
Towards Evaluating the Robustness of Neural Networks
  Nicholas Carlini, D. Wagner · OOD, AAML · 8,555 citations · 16 Aug 2016
Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 194,020 citations · 10 Dec 2015
The Limitations of Deep Learning in Adversarial Settings
  Nicolas Papernot, Patrick McDaniel, S. Jha, Matt Fredrikson, Z. Berkay Celik, A. Swami · AAML · 3,962 citations · 24 Nov 2015
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
  Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami · AAML · 3,072 citations · 14 Nov 2015
Explaining and Harnessing Adversarial Examples
  Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 19,066 citations · 20 Dec 2014
Intriguing properties of neural networks
  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 14,927 citations · 21 Dec 2013