arXiv:1802.00420 (v4)
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"
50 of 1,929 citing papers shown
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
T. Tanay
Jerone T. A. Andrews
Lewis D. Griffin
73
7
0
19 Jun 2018
Non-Negative Networks Against Adversarial Attacks
William Fleshman
Edward Raff
Jared Sylvester
Steven Forsyth
Mark McLean
AAML
66
41
0
15 Jun 2018
Hardware Trojan Attacks on Neural Networks
Joseph Clements
Yingjie Lao
AAML
78
89
0
14 Jun 2018
Manifold Mixup: Better Representations by Interpolating Hidden States
Vikas Verma
Alex Lamb
Christopher Beckham
Amir Najafi
Ioannis Mitliagkas
Aaron Courville
David Lopez-Paz
Yoshua Bengio
AAML
DRL
111
35
0
13 Jun 2018
Monge blunts Bayes: Hardness Results for Adversarial Training
Zac Cranko
A. Menon
Richard Nock
Cheng Soon Ong
Zhan Shi
Christian J. Walder
AAML
73
17
0
08 Jun 2018
Revisiting Adversarial Risk
A. Suggala
Adarsh Prasad
Vaishnavh Nagarajan
Pradeep Ravikumar
AAML
60
20
0
07 Jun 2018
Killing four birds with one Gaussian process: the relation between different test-time attacks
Kathrin Grosse
M. Smith
Michael Backes
AAML
47
2
0
06 Jun 2018
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders
Partha Ghosh
Arpan Losalka
Michael J. Black
AAML
74
78
0
31 May 2018
Scaling provable adversarial defenses
Eric Wong
Frank R. Schmidt
J. H. Metzen
J. Zico Kolter
AAML
105
450
0
31 May 2018
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu
Brendan Dolan-Gavitt
S. Garg
AAML
100
1,049
0
30 May 2018
Robustness May Be at Odds with Accuracy
Dimitris Tsipras
Shibani Santurkar
Logan Engstrom
Alexander Turner
Aleksander Madry
AAML
116
1,786
0
30 May 2018
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Chun-Chen Tu
Pai-Shun Ting
Pin-Yu Chen
Sijia Liu
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
Shin-Ming Cheng
MLAU
AAML
94
399
0
30 May 2018
Adversarial Examples in Remote Sensing
W. Czaja
Neil Fendley
M. Pekala
Christopher R. Ratto
I-J. Wang
AAML
49
68
0
28 May 2018
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
M. Alzantot
Yash Sharma
Supriyo Chakraborty
Huan Zhang
Cho-Jui Hsieh
Mani B. Srivastava
AAML
103
258
0
28 May 2018
Training verified learners with learned verifiers
Krishnamurthy Dvijotham
Sven Gowal
Robert Stanforth
Relja Arandjelović
Brendan O'Donoghue
J. Uesato
Pushmeet Kohli
OOD
111
170
0
25 May 2018
Adversarial examples from computational constraints
Sébastien Bubeck
Eric Price
Ilya P. Razenshteyn
AAML
151
233
0
25 May 2018
Towards the first adversarially robust neural network model on MNIST
Lukas Schott
Jonas Rauber
Matthias Bethge
Wieland Brendel
AAML
OOD
85
370
0
23 May 2018
Constructing Unrestricted Adversarial Examples with Generative Models
Yang Song
Rui Shu
Nate Kushman
Stefano Ermon
GAN
AAML
218
307
0
21 May 2018
Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference
Ruying Bao
Sihang Liang
Qingcan Wang
GAN
AAML
71
13
0
21 May 2018
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks
Jiefeng Chen
Xi Wu
Vaibhav Rastogi
Yingyu Liang
S. Jha
AAML
79
22
0
20 May 2018
Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Tribhuvanesh Orekondy
Seong Joon Oh
Yang Zhang
Bernt Schiele
Mario Fritz
PICV
FedML
438
37
0
15 May 2018
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Jingyi Wang
Jun Sun
Peixin Zhang
Xinyu Wang
AAML
76
41
0
14 May 2018
Curriculum Adversarial Training
Qi-Zhi Cai
Min Du
Chang-rui Liu
Basel Alomair
AAML
91
165
0
13 May 2018
Breaking Transferability of Adversarial Samples with Randomness
Yan Zhou
Murat Kantarcioglu
B. Xi
AAML
52
12
0
11 May 2018
Deep Nets: What have they ever done for Vision?
Alan Yuille
Chenxi Liu
224
104
0
10 May 2018
Verisimilar Percept Sequences Tests for Autonomous Driving Intelligent Agent Assessment
Thomio Watanabe
D. Wolf
27
8
0
07 May 2018
PRADA: Protecting against DNN Model Stealing Attacks
Mika Juuti
S. Szyller
Samuel Marchal
Nadarajah Asokan
SILM
AAML
107
445
0
07 May 2018
Adversarially Robust Generalization Requires More Data
Ludwig Schmidt
Shibani Santurkar
Dimitris Tsipras
Kunal Talwar
Aleksander Madry
OOD
AAML
205
797
0
30 Apr 2018
VectorDefense: Vectorization as a Defense to Adversarial Examples
V. Kabilan
Brandon L. Morris
Anh Totti Nguyen
AAML
66
21
0
23 Apr 2018
ADef: an Iterative Algorithm to Construct Adversarial Deformations
Rima Alaifari
Giovanni S. Alberti
Tandri Gauksson
AAML
94
97
0
20 Apr 2018
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
Shang-Tse Chen
Cory Cornelius
Jason Martin
Duen Horng Chau
ObjD
213
429
0
16 Apr 2018
Adversarial Attacks Against Medical Deep Learning Systems
S. G. Finlayson
Hyung Won Chung
I. Kohane
Andrew L. Beam
SILM
AAML
OOD
MedIm
85
232
0
15 Apr 2018
On the Limitation of MagNet Defense against L_1-based Adversarial Examples
Pei-Hsuan Lu
Pin-Yu Chen
Kang-Cheng Chen
Chia-Mu Yu
AAML
114
19
0
14 Apr 2018
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses
Anish Athalye
Nicholas Carlini
AAML
79
170
0
10 Apr 2018
Adversarial Training Versus Weight Decay
A. Galloway
T. Tanay
Graham W. Taylor
AAML
70
23
0
10 Apr 2018
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
Pu Zhao
Sijia Liu
Yanzhi Wang
Xinyu Lin
AAML
72
37
0
09 Apr 2018
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Alex Lamb
Jonathan Binas
Anirudh Goyal
Dmitriy Serdyuk
Sandeep Subramanian
Ioannis Mitliagkas
Yoshua Bengio
OOD
94
43
0
07 Apr 2018
Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks
Neale Ratzlaff
Fuxin Li
AAML
FedML
35
1
0
05 Apr 2018
Defending against Adversarial Images using Basis Functions Transformations
Uri Shaham
J. Garritano
Yutaro Yamada
Ethan Weinberger
A. Cloninger
Xiuyuan Cheng
Kelly P. Stanton
Y. Kluger
AAML
69
57
0
28 Mar 2018
On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Pei-Hsuan Lu
Pin-Yu Chen
Chia-Mu Yu
AAML
66
26
0
26 Mar 2018
Adversarial Defense based on Structure-to-Signal Autoencoders
Joachim Folz
Sebastián M. Palacio
Jörn Hees
Damian Borth
Andreas Dengel
AAML
71
32
0
21 Mar 2018
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
Lei Ma
Felix Juefei Xu
Fuyuan Zhang
Jiyuan Sun
Minhui Xue
...
Ting Su
Li Li
Yang Liu
Jianjun Zhao
Yadong Wang
ELM
80
626
0
20 Mar 2018
A Dual Approach to Scalable Verification of Deep Networks
Krishnamurthy Dvijotham
Robert Stanforth
Sven Gowal
Timothy A. Mann
Pushmeet Kohli
70
399
0
17 Mar 2018
Adversarial Logit Pairing
Harini Kannan
Alexey Kurakin
Ian Goodfellow
AAML
103
629
0
16 Mar 2018
Semantic Adversarial Examples
Hossein Hosseini
Radha Poovendran
GAN
AAML
108
199
0
16 Mar 2018
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
Zihao Liu
Qi Liu
Tao Liu
Nuo Xu
Xue Lin
Yanzhi Wang
Wujie Wen
AAML
MQ
85
265
0
14 Mar 2018
Invisible Mask: Practical Attacks on Face Recognition with Infrared
Zhe Zhou
Di Tang
Xiaofeng Wang
Weili Han
Xiangyu Liu
Kehuan Zhang
CVBM
AAML
68
103
0
13 Mar 2018
Detecting Adversarial Examples via Neural Fingerprinting
Sumanth Dathathri
Stephan Zheng
Tianwei Yin
Richard M. Murray
Yisong Yue
MLAU
AAML
48
0
0
11 Mar 2018
Style Memory: Making a Classifier Network Generative
R. Wiyatno
Jeff Orchard
70
4
0
05 Mar 2018
Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu
Zhanxing Zhu
Cheng Tai
E. Weinan
AAML
SILM
80
99
0
27 Feb 2018