arXiv:2108.09135v2
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang, Saeed Mahloujifar, Prateek Mittal
20 August 2021
Topics: VLM, AAML
Papers citing "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" (10 of 60 papers shown)
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| MagNet: a Two-Pronged Defense against Adversarial Examples | Dongyu Meng, Hao Chen | AAML | 1,208 | 25 May 2017 |
| Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Nicholas Carlini, D. Wagner | AAML | 1,864 | 20 May 2017 |
| Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks | Weilin Xu, David Evans, Yanjun Qi | AAML | 1,269 | 04 Apr 2017 |
| On Detecting Adversarial Perturbations | J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff | AAML | 950 | 14 Feb 2017 |
| Towards Evaluating the Robustness of Neural Networks | Nicholas Carlini, D. Wagner | OOD, AAML | 8,555 | 16 Aug 2016 |
| Deep Residual Learning for Image Recognition | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | MedIm | 194,020 | 10 Dec 2015 |
| The Limitations of Deep Learning in Adversarial Settings | Nicolas Papernot, Patrick McDaniel, S. Jha, Matt Fredrikson, Z. Berkay Celik, A. Swami | AAML | 3,962 | 24 Nov 2015 |
| Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks | Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami | AAML | 3,072 | 14 Nov 2015 |
| Explaining and Harnessing Adversarial Examples | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | AAML, GAN | 19,066 | 20 Dec 2014 |
| Intriguing properties of neural networks | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus | AAML | 14,927 | 21 Dec 2013 |