Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
Andrew Slavin Ross, Finale Doshi-Velez. 26 November 2017. arXiv:1711.09404. AAML.
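The method named in the title trains a network with a penalty on the norm of the loss's gradient with respect to the input, an idea related to double backpropagation. Below is a minimal PyTorch-style sketch of that training objective, assuming a standard cross-entropy classifier; the function name and the penalty weight `lam` are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy plus a squared-norm penalty on the input gradient.

    A sketch of input-gradient regularization; `lam` is a hypothetical
    penalty weight chosen for illustration.
    """
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Differentiate the loss w.r.t. the *input*, keeping the graph so the
    # penalty itself can be backpropagated through (double backprop).
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    # Sum the squared gradient over all non-batch dimensions, then average
    # over the batch.
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return ce + lam * penalty
```

The combined loss is then minimized as usual; penalizing large input gradients flattens the loss surface around training points, which is the mechanism the title credits for both robustness and interpretability.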
Papers citing "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients" (9 of 109 papers shown)
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli. SILM, AAML. 08 Sep 2018.

Security Consideration For Deep Learning-Based Image Forensics
Wei-Ye Zhao, Pengpeng Yang, R. Ni, Yao-Min Zhao, Haorui Wu. AAML. 29 Mar 2018.

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, Weinan E. AAML, SILM. 27 Feb 2018.

Deep Defense: Training DNNs with Improved Adversarial Robustness
Ziang Yan, Yiwen Guo, Changshui Zhang. AAML. 23 Feb 2018.

L2-Nonexpansive Neural Networks
Haifeng Qian, M. Wegman. 22 Feb 2018.

Gradient Regularization Improves Accuracy of Discriminative Models
D. Varga, Adrián Csiszárik, Zsolt Zombori. 28 Dec 2017.

Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Zou. FAtt, AAML. 29 Oct 2017.

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio. AAML. 04 Nov 2016.

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio. SILM, AAML. 08 Jul 2016.