Understanding and Enhancing the Transferability of Adversarial Examples (arXiv:1802.09707)
27 February 2018
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
Topics: AAML, SILM
Papers citing "Understanding and Enhancing the Transferability of Adversarial Examples" (23 of 23 papers shown):

- Sensitivity and Generalization in Neural Networks: an Empirical Study (23 Feb 2018) by Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein. Topics: AAML. Citations: 439.
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (01 Feb 2018) by Anish Athalye, Nicholas Carlini, D. Wagner. Topics: AAML. Citations: 3,180.
- Spatially Transformed Adversarial Examples (08 Jan 2018) by Chaowei Xiao, Jun-Yan Zhu, Yue Liu, Warren He, M. Liu, D. Song. Topics: AAML. Citations: 522.
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models (12 Dec 2017) by Wieland Brendel, Jonas Rauber, Matthias Bethge. Topics: AAML. Citations: 1,342.
- Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients (26 Nov 2017) by A. Ross, Finale Doshi-Velez. Topics: AAML. Citations: 680.
- Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples (18 Aug 2017) by Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao. Topics: AAML. Citations: 129.
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models (14 Aug 2017) by Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh. Topics: AAML. Citations: 1,875.
- NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles (12 Jul 2017) by Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth. Topics: AAML. Citations: 280.
- Towards Deep Learning Models Resistant to Adversarial Attacks (19 Jun 2017) by Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Topics: SILM, OOD. Citations: 12,029.
- SmoothGrad: removing noise by adding noise (12 Jun 2017) by D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg. Topics: FAtt, ODL. Citations: 2,221.
- Ensemble Adversarial Training: Attacks and Defenses (19 May 2017) by Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Topics: AAML. Citations: 2,720.
- The Space of Transferable Adversarial Examples (11 Apr 2017) by Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Topics: AAML, SILM. Citations: 556.
- The Shattered Gradients Problem: If resnets are the answer, then what is the question? (28 Feb 2017) by David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, Brian McWilliams. Topics: ODL. Citations: 401.
- Towards the Science of Security and Privacy in Machine Learning (11 Nov 2016) by Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael P. Wellman. Topics: AAML. Citations: 473.
- Delving into Transferable Adversarial Examples and Black-box Attacks (08 Nov 2016) by Yanpei Liu, Xinyun Chen, Chang-rui Liu, D. Song. Topics: AAML. Citations: 1,731.
- Entropy-SGD: Biasing Gradient Descent Into Wide Valleys (06 Nov 2016) by Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina. Topics: ODL. Citations: 773.
- Universal adversarial perturbations (26 Oct 2016) by Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard. Topics: AAML. Citations: 2,525.
- Towards Evaluating the Robustness of Neural Networks (16 Aug 2016) by Nicholas Carlini, D. Wagner. Topics: OOD, AAML. Citations: 8,533.
- Adversarial examples in the physical world (08 Jul 2016) by Alexey Kurakin, Ian Goodfellow, Samy Bengio. Topics: SILM, AAML. Citations: 5,885.
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples (24 May 2016) by Nicolas Papernot, Patrick McDaniel, Ian Goodfellow. Topics: SILM, AAML. Citations: 1,738.
- Practical Black-Box Attacks against Machine Learning (08 Feb 2016) by Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami. Topics: MLAU, AAML. Citations: 3,672.
- Explaining and Harnessing Adversarial Examples (20 Dec 2014) by Ian Goodfellow, Jonathon Shlens, Christian Szegedy. Topics: AAML, GAN. Citations: 19,017.
- Intriguing properties of neural networks (21 Dec 2013) by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus. Topics: AAML. Citations: 14,893.