arXiv:1802.00420
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (showing 29 of 1,929)
Robust GANs against Dishonest Adversaries
    Zhi Xu, Chengtao Li, Stefanie Jegelka · AAML · 80 / 3 / 0 · 27 Feb 2018

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries
    Z. Yao, A. Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney · 100 / 167 / 0 · 22 Feb 2018

L2-Nonexpansive Neural Networks
    Haifeng Qian, M. Wegman · 75 / 74 / 0 · 22 Feb 2018

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
    Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau · FedML, AAML · 85 / 228 / 0 · 19 Feb 2018

Divide, Denoise, and Defend against Adversarial Attacks
    Seyed-Mohsen Moosavi-Dezfooli, A. Shrivastava, Oncel Tuzel · AAML · 57 / 45 / 0 · 19 Feb 2018

Are Generative Classifiers More Robust to Adversarial Attacks?
    Yingzhen Li, John Bradshaw, Yash Sharma · AAML · 102 / 79 / 0 · 19 Feb 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
    J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli · AAML · 182 / 606 / 0 · 15 Feb 2018

Fooling OCR Systems with Adversarial Text Images
    Congzheng Song, Vitaly Shmatikov · AAML · 61 / 51 / 0 · 15 Feb 2018

Predicting Adversarial Examples with High Confidence
    A. Galloway, Graham W. Taylor, M. Moussa · AAML · 56 / 9 / 0 · 13 Feb 2018

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
    Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama · AAML · 117 / 309 / 0 · 12 Feb 2018

Certified Robustness to Adversarial Examples with Differential Privacy
    Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana · SILM, AAML · 131 / 940 / 0 · 09 Feb 2018

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples
    Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan · AAML · 87 / 4 / 0 · 05 Feb 2018

First-order Adversarial Vulnerability of Neural Networks and Input Dimension
    Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz · AAML · 111 / 48 / 0 · 05 Feb 2018

Secure Detection of Image Manipulation by means of Random Feature Selection
    Zhongfu Chen, B. Tondi, Xiaolong Li, R. Ni, Yao-Min Zhao, Mauro Barni · AAML · 80 / 34 / 0 · 02 Feb 2018

Adversarial Spheres
    Justin Gilmer, Luke Metz, Fartash Faghri, S. Schoenholz, M. Raghu, Martin Wattenberg, Ian Goodfellow · AAML · 74 / 7 / 0 · 09 Jan 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
    Nicholas Carlini, D. Wagner · AAML · 101 / 1,083 / 0 · 05 Jan 2018

A General Framework for Adversarial Examples with Objectives
    Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter · AAML, GAN · 84 / 196 / 0 · 31 Dec 2017

The Robust Manifold Defense: Adversarial Training using Generative Models
    A. Jalal, Andrew Ilyas, C. Daskalakis, A. Dimakis · AAML · 109 / 174 / 0 · 26 Dec 2017

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
    Battista Biggio, Fabio Roli · AAML · 181 / 1,411 / 0 · 08 Dec 2017

Generative Adversarial Perturbations
    Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie · AAML, GAN, WIGM · 88 / 356 / 0 · 06 Dec 2017

Towards Robust Neural Networks via Random Self-ensemble
    Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh · FedML, AAML · 108 / 424 / 0 · 02 Dec 2017

On the Robustness of Semantic Segmentation Models to Adversarial Attacks
    Anurag Arnab, O. Mikšík, Philip Torr · AAML · 115 / 308 / 0 · 27 Nov 2017

Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
    Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, S. Jha · AAML · 94 / 21 / 0 · 21 Nov 2017

Evaluating Robustness of Neural Networks with Mixed Integer Programming
    Vincent Tjeng, Kai Y. Xiao, Russ Tedrake · AAML · 111 / 117 / 0 · 20 Nov 2017

Adversarial Attacks Beyond the Image Space
    Fangyin Wei, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille · AAML · 126 / 150 / 0 · 20 Nov 2017

Provable defenses against adversarial examples via the convex outer adversarial polytope
    Eric Wong, J. Zico Kolter · AAML · 204 / 1,506 / 0 · 02 Nov 2017

Provably Minimally-Distorted Adversarial Examples
    Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill · AAML · 105 / 89 / 0 · 29 Sep 2017

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features
    Liang Tong, Yue Liu, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik · AAML, OOD · 52 / 88 / 0 · 28 Aug 2017

Ensemble Adversarial Training: Attacks and Defenses
    Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel · AAML · 217 / 2,738 / 0 · 19 May 2017