Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
arXiv:1802.00420 (v4, latest) · 1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 papers shown)

Title | Authors | Tags | Counts | Date
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch | G. Ding, Luyu Wang, Xiaomeng Jin | | 74 / 183 / 0 | 20 Feb 2019
On Evaluating Adversarial Robustness | Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 147 / 905 / 0 | 18 Feb 2019
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces | Mohammad Saidur Rahman, Mohsen Imani, Nate Mathews, M. Wright | AAML | 86 / 81 / 0 | 18 Feb 2019
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks | Yueyao Yu, Pengfei Yu, Wenye Li | AAML | 18 / 6 / 0 | 18 Feb 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples | Kevin Roth, Yannic Kilcher, Thomas Hofmann | AAML | 80 / 176 / 0 | 13 Feb 2019
Model Compression with Adversarial Robustness: A Unified Optimization Framework | Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu | MQ | 79 / 139 / 0 | 10 Feb 2019
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks | Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Y. Tsai | BDL, AAML, CML | 80 / 21 / 0 | 09 Feb 2019
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis | Danilo Vasconcellos Vargas, Jiawei Su | FAtt, AAML | 41 / 38 / 0 | 08 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing | Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter | AAML | 219 / 2,057 / 0 | 08 Feb 2019
Fooling Neural Network Interpretations via Adversarial Model Manipulation | Juyeon Heo, Sunghwan Joo, Taesup Moon | AAML, FAtt | 126 / 206 / 0 | 06 Feb 2019
Are All Layers Created Equal? | Chiyuan Zhang, Samy Bengio, Y. Singer | | 111 / 140 / 0 | 06 Feb 2019
Theoretical evidence for adversarial robustness through randomization | Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif | AAML | 110 / 83 / 0 | 04 Feb 2019
Computational Limitations in Robust Classification and Win-Win Results | Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan | | 67 / 39 / 0 | 04 Feb 2019
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks | S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann | OOD | 49 / 19 / 0 | 01 Feb 2019
Robustness Certificates Against Adversarial Examples for ReLU Networks | Sahil Singla, Soheil Feizi | AAML | 68 / 21 / 0 | 01 Feb 2019
Augmenting Model Robustness with Transformation-Invariant Attacks | Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang, Yi Ren | AAML, OOD | 31 / 3 / 0 | 31 Jan 2019
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance | A. Shamir, Itay Safran, Eyal Ronen, O. Dunkelman | GAN, AAML | 59 / 95 / 0 | 30 Jan 2019
Reliable Smart Road Signs | M. O. Sayin, Chung-Wei Lin, Eunsuk Kang, Shin'ichi Shiraishi, Tamer Basar | AAML | 23 / 0 / 0 | 30 Jan 2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise | Nic Ford, Justin Gilmer, Nicholas Carlini, E. D. Cubuk | AAML | 137 / 320 / 0 | 29 Jan 2019
Improving Adversarial Robustness of Ensembles with Diversity Training | Sanjay Kariyappa, Moinuddin K. Qureshi | AAML, FedML | 88 / 138 / 0 | 28 Jan 2019
Defense Methods Against Adversarial Examples for Recurrent Neural Networks | Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach | AAML, GAN | 81 / 42 / 0 | 28 Jan 2019
Improving Adversarial Robustness via Promoting Ensemble Diversity | Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu | AAML | 106 / 441 / 0 | 25 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy | Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan | | 254 / 2,566 / 0 | 24 Jan 2019
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples | Kamil Nar, Orhan Ocal, S. Shankar Sastry, Kannan Ramchandran | AAML | 90 / 54 / 0 | 24 Jan 2019
The Limitations of Adversarial Training and the Blind-Spot Attack | Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, Cho-Jui Hsieh | AAML | 76 / 145 / 0 | 15 Jan 2019
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System | Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang | AAML | 75 / 28 / 0 | 12 Jan 2019
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification | L. G. Hafemann, R. Sabourin, Luiz Eduardo Soares de Oliveira | AAML | 55 / 44 / 0 | 10 Jan 2019
Image Super-Resolution as a Defense Against Adversarial Attacks | Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao | AAML, SupR | 100 / 176 / 0 | 07 Jan 2019
Adversarial CAPTCHAs | Chenghui Shi, Xiaogang Xu, S. Ji, Kai Bu, Jianhai Chen, R. Beyah, Ting Wang | AAML | 51 / 53 / 0 | 04 Jan 2019
Adversarial Robustness May Be at Odds With Simplicity | Preetum Nakkiran | AAML | 109 / 108 / 0 | 02 Jan 2019
AIR5: Five Pillars of Artificial Intelligence Research | Yew-Soon Ong, Abhishek Gupta | | 74 / 29 / 0 | 30 Dec 2018
Adversarial Attack and Defense on Graph Data: A Survey | Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Yangqiu Song | GNN, AAML | 139 / 286 / 0 | 26 Dec 2018
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples | Qiang Zeng, Jianhai Su, Chenglong Fu, Golam Kayas, Lannan Luo | AAML | 55 / 46 / 0 | 26 Dec 2018
A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome | Li Chen, Qi Li, Jiawei Zhu, Jian Peng, Haifeng Li | AAML | 59 / 3 / 0 | 25 Dec 2018
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning | Mehdi Jafarnia-Jahromi, Tasmin Chowdhury, Hsin-Tai Wu, S. Mukherjee | AAML | 47 / 4 / 0 | 25 Dec 2018
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense | Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, Nenghai Yu | 3DPC | 69 / 8 / 0 | 25 Dec 2018
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks | T. Brunner, Frederik Diehl, Michael Truong-Le, Alois Knoll | MLAU, AAML | 77 / 117 / 0 | 24 Dec 2018
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings | François Menet, Paul Berthier, José M. Fernandez, M. Gagnon | AAML | 27 / 10 / 0 | 17 Dec 2018
Designing Adversarially Resilient Classifiers using Resilient Feature Engineering | Kevin Eykholt, A. Prakash | AAML | 60 / 4 / 0 | 17 Dec 2018
Trust Region Based Adversarial Attack on Neural Networks | Z. Yao, A. Gholami, Peng Xu, Kurt Keutzer, Michael W. Mahoney | AAML | 64 / 54 / 0 | 16 Dec 2018
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples | E. Balda, Arash Behboodi, R. Mathar | AAML | 30 / 5 / 0 | 15 Dec 2018
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing | Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang | AAML | 78 / 191 / 0 | 14 Dec 2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem | Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf | OODD | 187 / 559 / 0 | 13 Dec 2018
Adversarial Framing for Image and Video Classification | Konrad Zolna, Michal Zajac, Negar Rostamzadeh, Pedro H. O. Pinheiro | AAML | 106 / 61 / 0 | 11 Dec 2018
On the Security of Randomized Defenses Against Adversarial Samples | K. Sharad, G. Marson, H. Truong, Ghassan O. Karame | AAML | 47 / 1 / 0 | 11 Dec 2018
Defending Against Universal Perturbations With Shared Adversarial Training | Chaithanya Kumar Mummadi, Thomas Brox, J. H. Metzen | AAML | 84 / 60 / 0 | 10 Dec 2018
Feature Denoising for Improving Adversarial Robustness | Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He | | 172 / 916 / 0 | 09 Dec 2018
Fooling Network Interpretation in Image Classification | Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash | AAML, FAtt | 49 / 7 / 0 | 06 Dec 2018
MMA Training: Direct Input Space Margin Maximization through Adversarial Training | G. Ding, Yash Sharma, Kry Yik-Chau Lui, Ruitong Huang | AAML | 112 / 274 / 0 | 06 Dec 2018
The Limitations of Model Uncertainty in Adversarial Settings | Kathrin Grosse, David Pfaff, M. Smith, Michael Backes | AAML | 63 / 34 / 0 | 06 Dec 2018