1706.06083
Towards Deep Learning Models Resistant to Adversarial Attacks
19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM
OOD
Papers citing
"Towards Deep Learning Models Resistant to Adversarial Attacks"
50 / 6,519 papers shown
Image Decomposition and Classification through a Generative Model
Houpu Yao
Malcolm Regan
Yezhou Yang
Yi Ren
GAN
9
1
0
09 Feb 2019
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images
S. Srivastava
Guy Ben-Yosef
Xavier Boix
AAML
30
27
0
08 Feb 2019
Discretization based Solutions for Secure Machine Learning against Adversarial Attacks
Priyadarshini Panda
I. Chakraborty
Kaushik Roy
AAML
25
40
0
08 Feb 2019
Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis
Danilo Vasconcellos Vargas
Jiawei Su
FAtt
AAML
11
36
0
08 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
22
1,998
0
08 Feb 2019
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
Derui Wang
Chaoran Li
S. Wen
Qing-Long Han
Surya Nepal
Xiangyu Zhang
Yang Xiang
AAML
30
40
0
06 Feb 2019
Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo
Sunghwan Joo
Taesup Moon
AAML
FAtt
27
201
0
06 Feb 2019
Are All Layers Created Equal?
Chiyuan Zhang
Samy Bengio
Y. Singer
20
140
0
06 Feb 2019
Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
Nicholas Frosst
Nicolas Papernot
Geoffrey E. Hinton
17
157
0
05 Feb 2019
Theoretical evidence for adversarial robustness through randomization
Rafael Pinot
Laurent Meunier
Alexandre Araujo
H. Kashima
Florian Yger
Cédric Gouy-Pailler
Jamal Atif
AAML
47
82
0
04 Feb 2019
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Alberto Marchisio
Giorgio Nanfa
Faiq Khalid
Muhammad Abdullah Hanif
Maurizio Martina
Mohamed Bennai
AAML
13
7
0
04 Feb 2019
Collaborative Sampling in Generative Adversarial Networks
Yuejiang Liu
Parth Kothari
Alexandre Alahi
TTA
30
16
0
02 Feb 2019
What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
Chi Jin
Praneeth Netrapalli
Michael I. Jordan
21
82
0
02 Feb 2019
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
S. Saralajew
Lars Holdijk
Maike Rees
T. Villmann
OOD
25
19
0
01 Feb 2019
The Efficacy of SHIELD under Different Threat Models
Cory Cornelius
Nilaksh Das
Shang-Tse Chen
Li Chen
Michael E. Kounavis
Duen Horng Chau
AAML
18
11
0
01 Feb 2019
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
Sahil Singla
Eric Wallace
Shi Feng
S. Feizi
FAtt
34
59
0
01 Feb 2019
Robustness Certificates Against Adversarial Examples for ReLU Networks
Sahil Singla
S. Feizi
AAML
25
21
0
01 Feb 2019
Natural and Adversarial Error Detection using Invariance to Image Transformations
Yuval Bahat
Michal Irani
Gregory Shakhnarovich
AAML
9
18
0
01 Feb 2019
A New Family of Neural Networks Provably Resistant to Adversarial Attacks
Rakshit Agrawal
Luca de Alfaro
D. Helmbold
AAML
OOD
27
2
0
01 Feb 2019
Augmenting Model Robustness with Transformation-Invariant Attacks
Houpu Yao
Zhe Wang
Guangyu Nie
Yassine Mazboudi
Yezhou Yang
Yi Ren
AAML
OOD
11
3
0
31 Jan 2019
HyperGAN: A Generative Model for Diverse, Performant Neural Networks
Neale Ratzlaff
Fuxin Li
20
63
0
30 Jan 2019
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance
A. Shamir
Itay Safran
Eyal Ronen
O. Dunkelman
GAN
AAML
11
94
0
30 Jan 2019
Reliable Smart Road Signs
M. O. Sayin
Chung-Wei Lin
Eunsuk Kang
Shin'ichi Shiraishi
Tamer Basar
AAML
11
0
0
30 Jan 2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Nic Ford
Justin Gilmer
Nicholas Carlini
E. D. Cubuk
AAML
36
318
0
29 Jan 2019
On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks
P. Langenberg
E. Balda
Arash Behboodi
R. Mathar
11
16
0
29 Jan 2019
Improving Adversarial Robustness of Ensembles with Diversity Training
Sanjay Kariyappa
Moinuddin K. Qureshi
AAML
FedML
17
133
0
28 Jan 2019
Defense Methods Against Adversarial Examples for Recurrent Neural Networks
Ishai Rosenberg
A. Shabtai
Yuval Elovici
Lior Rokach
AAML
GAN
32
42
0
28 Jan 2019
Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks
Kimin Lee
Mantas Mazeika
NoLa
34
721
0
28 Jan 2019
Characterizing the Shape of Activation Space in Deep Neural Networks
Thomas Gebhart
Paul Schrater
Alan Hylton
AAML
11
7
0
28 Jan 2019
On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh
Cheng-Yu Hsieh
A. Suggala
David I. Inouye
Pradeep Ravikumar
FAtt
39
449
0
27 Jan 2019
Improving Adversarial Robustness via Promoting Ensemble Diversity
Tianyu Pang
Kun Xu
Chao Du
Ning Chen
Jun Zhu
AAML
41
434
0
25 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Michael I. Jordan
69
2,500
0
24 Jan 2019
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
Kamil Nar
Orhan Ocal
S. Shankar Sastry
Kannan Ramchandran
AAML
27
54
0
24 Jan 2019
A review of domain adaptation without target labels
Wouter M. Kouw
Marco Loog
OOD
VLM
15
478
0
16 Jan 2019
Optimization Problems for Machine Learning: A Survey
Claudio Gambella
Bissan Ghaddar
Joe Naoum-Sawaya
AI4CE
37
178
0
16 Jan 2019
The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang
Hongge Chen
Zhao Song
Duane S. Boning
Inderjit S. Dhillon
Cho-Jui Hsieh
AAML
22
144
0
15 Jan 2019
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification
L. G. Hafemann
R. Sabourin
Luiz Eduardo Soares de Oliveira
AAML
19
42
0
10 Jan 2019
Image Transformation can make Neural Networks more robust against Adversarial Examples
D. D. Thang
Toshihiro Matsui
AAML
11
10
0
10 Jan 2019
Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers
Daniel Liu
Ronald Yu
Hao Su
3DPC
34
165
0
10 Jan 2019
Image Super-Resolution as a Defense Against Adversarial Attacks
Aamir Mustafa
Salman H. Khan
Munawar Hayat
Jianbing Shen
Ling Shao
AAML
SupR
27
168
0
07 Jan 2019
Adversarial CAPTCHAs
Chenghui Shi
Xiaogang Xu
S. Ji
Kai Bu
Jianhai Chen
R. Beyah
Ting Wang
AAML
22
52
0
04 Jan 2019
Adversarial Robustness May Be at Odds With Simplicity
Preetum Nakkiran
AAML
14
105
0
02 Jan 2019
Adversarial Attack and Defense on Graph Data: A Survey
Lichao Sun
Yingtong Dou
Carl Yang
Ji Wang
Yixin Liu
Philip S. Yu
Lifang He
Yangqiu Song
GNN
AAML
23
275
0
26 Dec 2018
Towards a Theoretical Understanding of Hashing-Based Neural Nets
Yibo Lin
Zhao Song
Lin F. Yang
22
5
0
26 Dec 2018
PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning
Mehdi Jafarnia-Jahromi
Tasmin Chowdhury
Hsin-Tai Wu
S. Mukherjee
AAML
27
4
0
25 Dec 2018
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
T. Brunner
Frederik Diehl
Michael Truong-Le
Alois Knoll
MLAU
AAML
17
116
0
24 Dec 2018
Increasing the adversarial robustness and explainability of capsule networks with γ-capsules
David Peer
Sebastian Stabinger
A. Rodríguez-Sánchez
AAML
GAN
MedIm
39
11
0
23 Dec 2018
Exploiting the Inherent Limitation of L0 Adversarial Examples
F. Zuo
Bokai Yang
Xiaopeng Li
Lannan Luo
Qiang Zeng
AAML
29
1
0
23 Dec 2018
Towards resilient machine learning for ransomware detection
Li-Wei Chen
Chih-Yuan Yang
Anindya Paul
R. Sahita
AAML
14
22
0
21 Dec 2018
Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge
Deqiang Li
Qianmu Li
Yanfang Ye
Shouhuai Xu
AAML
19
15
0
19 Dec 2018