Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
arXiv 1706.06083 · 19 June 2017 · SILM, OOD
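For context, this paper formulates adversarial training as a saddle-point (min-max) problem and uses projected gradient descent (PGD) with a random start inside the perturbation ball as the inner attack. Below is a minimal PyTorch sketch of that recipe; the model, data, and hyperparameters (eps, alpha, steps) are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch of L-infinity PGD and a min-max adversarial training step.
# Assumes a PyTorch classifier over inputs scaled to [0, 1]; all settings
# below are illustrative, not the paper's exact setup.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization: PGD on the cross-entropy loss within an L-inf ball."""
    # Random start inside the eps-ball, clipped to the valid input range.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)

    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        # Ascent step on the gradient sign, then project back onto the
        # eps-ball and onto the valid image range [0, 1].
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)

    return (x + delta).detach()


def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one optimizer step on PGD adversarial examples."""
    model.eval()                      # craft the attack with fixed BN statistics
    x_adv = pgd_attack(model, x, y)
    model.train()

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```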
Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks" (showing 50 of 6,519)
Non-Determinism in Neural Networks for Adversarial Robustness
Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jiménez, Bhiksha Raj, Rita Singh
OOD, AAML · 19 · 3 · 0 · 26 May 2019

Robust Classification using Robust Feature Augmentation
Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng
AAML · 19 · 2 · 0 · 26 May 2019

Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu
AAML, OOD · 42 · 8 · 0 · 26 May 2019

Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling
A. Bose, Andre Cianflone, William L. Hamilton
OOD, AAML · 22 · 7 · 0 · 26 May 2019

State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael C. Mozer
OOD · 19 · 3 · 0 · 26 May 2019

Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders
Hebi Li, Qi Xiao, Shixin Tian, Jin Tian
AAML · 27 · 4 · 0 · 26 May 2019

Adversarial Distillation for Ordered Top-k Attacks
Zekun Zhang, Tianfu Wu
AAML · 14 · 2 · 0 · 25 May 2019

Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao, Peilin Zhong, Changxi Zheng
AAML · 24 · 97 · 0 · 25 May 2019

Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Yuanshun Yao, Huiying Li, Haitao Zheng, Ben Y. Zhao
AAML · 35 · 13 · 0 · 24 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal
SILM, MIACV, AAML · 6 · 235 · 0 · 24 May 2019

Robustness to Adversarial Perturbations in Learning from Incomplete Data
Amir Najafi, S. Maeda, Masanori Koyama, Takeru Miyato
OOD · 32 · 129 · 0 · 24 May 2019

Robust Attribution Regularization
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, S. Jha
OOD · 17 · 83 · 0 · 23 May 2019

Thwarting finite difference adversarial attacks with output randomization
Haidar Khan, Daniel Park, Azer Khan, B. Yener
SILM, AAML · 41 · 0 · 0 · 23 May 2019

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu
AAML, GAN, FAtt · 28 · 158 · 0 · 23 May 2019

Adversarially Robust Distillation
Micah Goldblum, Liam H. Fowl, S. Feizi, Tom Goldstein
AAML · 15 · 201 · 0 · 23 May 2019

A Direct Approach to Robust Deep Learning Using Adversarial Networks
Huaxia Wang, Chun-Nam Yu
GAN, AAML, OOD · 24 · 77 · 0 · 23 May 2019

Convergence and Margin of Adversarial Training on Separable Data
Zachary B. Charles, Shashank Rajput, S. Wright, Dimitris Papailiopoulos
AAML · 34 · 16 · 0 · 22 May 2019

Adversarially robust transfer learning
Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein
OOD · 15 · 114 · 0 · 20 May 2019
CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models
Shubham Sharma, Jette Henderson, Joydeep Ghosh
11 · 87 · 0 · 20 May 2019

Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
AAML · 37 · 18 · 0 · 19 May 2019

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 · 5 · 0 · 19 May 2019

Percival: Making In-Browser Perceptual Ad Blocking Practical With Deep Learning
Z. Din, P. Tigas, Samuel T. King, B. Livshits
VLM · 39 · 29 · 0 · 17 May 2019

Simple Black-box Adversarial Attacks
Chuan Guo, Jacob R. Gardner, Yurong You, A. Wilson, Kilian Q. Weinberger
AAML · 28 · 568 · 0 · 17 May 2019

A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
Nicholas Carlini
ELM · 25 · 14 · 0 · 17 May 2019

Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Seungyong Moon, Gaon An, Hyun Oh Song
AAML, MLAU · 25 · 133 · 0 · 16 May 2019

On Norm-Agnostic Robustness of Adversarial Training
Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
OOD, SILM · 16 · 7 · 0 · 15 May 2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
38 · 283 · 0 · 15 May 2019
Adversarial Examples for Electrocardiograms
Xintian Han, Yuxuan Hu, L. Foschini, L. Chinitz, Lior Jankelson, Rajesh Ranganath
AAML, MedIm · 11 · 4 · 0 · 13 May 2019

Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
M. Singh, Abhishek Sinha, Nupur Kumari, Harshitha Machiraju, Balaji Krishnamurthy, V. Balasubramanian
AAML · 19 · 61 · 0 · 13 May 2019

Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks
Olakunle Ibitoye, Omair Shafiq, Ashraf Matrawy
22 · 162 · 0 · 13 May 2019

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo
OOD · 406 · 4,694 · 0 · 13 May 2019

Moving Target Defense for Deep Visual Sensing against Adversarial Examples
Qun Song, Zhenyu Yan, Rui Tan
AAML · 21 · 20 · 0 · 11 May 2019

Interpreting and Evaluating Neural Network Robustness
Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, Xiang Chen
AAML · 15 · 55 · 0 · 10 May 2019

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
AAML, FAtt · 31 · 157 · 0 · 10 May 2019

Exploring the Hyperparameter Landscape of Adversarial Robustness
Evelyn Duesterwald, Anupama Murthi, Ganesh Venkataraman, M. Sinn, Deepak Vijaykeerthy
AAML · 16 · 7 · 0 · 09 May 2019

Learning Interpretable Features via Adversarially Robust Optimization
Ashkan Khakzar, Shadi Albarqouni, Nassir Navab
MedIm, FAtt · 17 · 14 · 0 · 09 May 2019
Adversarial Defense Framework for Graph Neural Network
Shen Wang, Zhengzhang Chen, Jingchao Ni, Xiao Yu, Zhichun Li, Haifeng Chen, Philip S. Yu
AAML, GNN · 25 · 28 · 0 · 09 May 2019

ROSA: Robust Salient Object Detection against Adversarial Attacks
Haofeng Li, Guanbin Li, Yizhou Yu
AAML · 16 · 28 · 0 · 09 May 2019

Does Data Augmentation Lead to Positive Margin?
Shashank Rajput, Zhili Feng, Zachary B. Charles, Po-Ling Loh, Dimitris Papailiopoulos
21 · 37 · 0 · 08 May 2019

An Empirical Evaluation of Adversarial Robustness under Transfer Learning
Todor Davchev, Timos Korres, Stathi Fotiadis, N. Antonopoulos, S. Ramamoorthy
AAML · 36 · 0 · 0 · 07 May 2019

Adaptive Generation of Unrestricted Adversarial Inputs
Isaac Dunn, Hadrien Pouget, T. Melham, Daniel Kroening
AAML · 28 · 7 · 0 · 07 May 2019

Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
SILM · 57 · 1,810 · 0 · 06 May 2019

Batch Normalization is a Cause of Adversarial Vulnerability
A. Galloway, A. Golubeva, T. Tanay, M. Moussa, Graham W. Taylor
ODL, AAML · 25 · 80 · 0 · 06 May 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal
OODD · 32 · 26 · 0 · 05 May 2019
CharBot: A Simple and Effective Method for Evading DGA Classifiers
Jonathan Peck, Claire Nie, R. Sivaguru, Charles Grumer, Femi G. Olumofin, Bin Yu, A. Nascimento, Martine De Cock
AAML · 11 · 43 · 0 · 03 May 2019

Transfer of Adversarial Robustness Between Perturbation Types
Daniel Kang, Yi Sun, Tom B. Brown, Dan Hendrycks, Jacob Steinhardt
AAML · 19 · 49 · 0 · 03 May 2019

You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
AAML · 39 · 357 · 0 · 02 May 2019

Adversarial Training with Voronoi Constraints
Marc Khoury, Dylan Hadfield-Menell
AAML · 28 · 24 · 0 · 02 May 2019

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong
AAML · 18 · 245 · 0 · 01 May 2019

Dropping Pixels for Adversarial Robustness
Hossein Hosseini, Sreeram Kannan, Radha Poovendran
14 · 16 · 0 · 01 May 2019