ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1706.06083 · Cited By
Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,612 papers shown
Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran
Brian Tang
Nicolas Papernot
Kassem Fawaz
S. Jha
Xi Wu
AAML OOD
100
8
0
26 May 2019
Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling
A. Bose
Andre Cianflone
William L. Hamilton
OOD AAML
75
7
0
26 May 2019
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Alex Lamb
Jonathan Binas
Anirudh Goyal
Sandeep Subramanian
Ioannis Mitliagkas
Denis Kazakov
Yoshua Bengio
Michael C. Mozer
OOD
45
3
0
26 May 2019
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders
Hebi Li
Qi Xiao
Shixin Tian
Jin Tian
AAML
68
4
0
26 May 2019
Adversarial Distillation for Ordered Top-k Attacks
Zekun Zhang
Tianfu Wu
AAML
44
2
0
25 May 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
80
99
0
25 May 2019
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Yuanshun Yao
Huiying Li
Haitao Zheng
Ben Y. Zhao
AAML
55
13
0
24 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM MIA CV AAML
94
248
0
24 May 2019
Robustness to Adversarial Perturbations in Learning from Incomplete Data
Amir Najafi
S. Maeda
Masanori Koyama
Takeru Miyato
OOD
92
131
0
24 May 2019
Robust Attribution Regularization
Jiefeng Chen
Xi Wu
Vaibhav Rastogi
Yingyu Liang
S. Jha
OOD
59
83
0
23 May 2019
Thwarting finite difference adversarial attacks with output randomization
Haidar Khan
Daniel Park
Azer Khan
B. Yener
SILM AAML
52
0
0
23 May 2019
Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang
Zhanxing Zhu
AAML GAN FAtt
125
161
0
23 May 2019
Adversarially Robust Distillation
Micah Goldblum
Liam H. Fowl
Soheil Feizi
Tom Goldstein
AAML
94
213
0
23 May 2019
A Direct Approach to Robust Deep Learning Using Adversarial Networks
Huaxia Wang
Chun-Nam Yu
GAN AAML OOD
76
77
0
23 May 2019
Convergence and Margin of Adversarial Training on Separable Data
Zachary B. Charles
Shashank Rajput
S. Wright
Dimitris Papailiopoulos
AAML
71
17
0
22 May 2019
Adversarially robust transfer learning
Ali Shafahi
Parsa Saadatpanah
Chen Zhu
Amin Ghiasi
Christoph Studer
David Jacobs
Tom Goldstein
OOD
52
117
0
20 May 2019
CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models
Shubham Sharma
Jette Henderson
Joydeep Ghosh
85
88
0
20 May 2019
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu
Yuchao Duan
Fu Song
Zhe Zhao
AAML
114
18
0
19 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
32
5
0
19 May 2019
Percival: Making In-Browser Perceptual Ad Blocking Practical With Deep Learning
Z. Din
P. Tigas
Samuel T. King
B. Livshits
VLM
160
29
0
17 May 2019
Simple Black-box Adversarial Attacks
Chuan Guo
Jacob R. Gardner
Yurong You
A. Wilson
Kilian Q. Weinberger
AAML
78
582
0
17 May 2019
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
Nicholas Carlini
ELM
68
14
0
17 May 2019
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Seungyong Moon
Gaon An
Hyun Oh Song
AAML MLAU
88
136
0
16 May 2019
On Norm-Agnostic Robustness of Adversarial Training
Bai Li
Changyou Chen
Wenlin Wang
Lawrence Carin
OOD SILM
68
7
0
15 May 2019
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu
Wenjie Huang
Ali Shafahi
Hengduo Li
Gavin Taylor
Christoph Studer
Tom Goldstein
125
286
0
15 May 2019
Adversarial Examples for Electrocardiograms
Xintian Han
Yuxuan Hu
L. Foschini
L. Chinitz
Lior Jankelson
Rajesh Ranganath
AAML MedIm
49
4
0
13 May 2019
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
M. Singh
Abhishek Sinha
Nupur Kumari
Harshitha Machiraju
Balaji Krishnamurthy
V. Balasubramanian
AAML
56
61
0
13 May 2019
Analyzing Adversarial Attacks Against Deep Learning for Intrusion Detection in IoT Networks
Olakunle Ibitoye
Omair Shafiq
Ashraf Matrawy
61
164
0
13 May 2019
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Sangdoo Yun
Dongyoon Han
Seong Joon Oh
Sanghyuk Chun
Junsuk Choe
Y. Yoo
OOD
660
4,833
0
13 May 2019
Moving Target Defense for Deep Visual Sensing against Adversarial Examples
Qun Song
Zhenyu Yan
Rui Tan
AAML
47
21
0
11 May 2019
Interpreting and Evaluating Neural Network Robustness
Fuxun Yu
Zhuwei Qin
Chenchen Liu
Liang Zhao
Yanzhi Wang
Xiang Chen
AAML
57
56
0
10 May 2019
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann
Sebastian Lunz
Peter Maass
Carola-Bibiane Schönlieb
AAMLFAtt
63
162
0
10 May 2019
Exploring the Hyperparameter Landscape of Adversarial Robustness
Evelyn Duesterwald
Anupama Murthi
Ganesh Venkataraman
M. Sinn
Deepak Vijaykeerthy
AAML
52
7
0
09 May 2019
Learning Interpretable Features via Adversarially Robust Optimization
Ashkan Khakzar
Shadi Albarqouni
Nassir Navab
MedIm FAtt
55
14
0
09 May 2019
Adversarial Defense Framework for Graph Neural Network
Shen Wang
Zhengzhang Chen
Jingchao Ni
Xiao Yu
Zhichun Li
Haifeng Chen
Philip S. Yu
AAML GNN
71
28
0
09 May 2019
ROSA: Robust Salient Object Detection against Adversarial Attacks
Haofeng Li
Guanbin Li
Yizhou Yu
AAML
70
29
0
09 May 2019
Does Data Augmentation Lead to Positive Margin?
Shashank Rajput
Zhili Feng
Zachary B. Charles
Po-Ling Loh
Dimitris Papailiopoulos
86
38
0
08 May 2019
An Empirical Evaluation of Adversarial Robustness under Transfer Learning
Todor Davchev
Timos Korres
Stathi Fotiadis
N. Antonopoulos
S. Ramamoorthy
AAML
38
0
0
07 May 2019
Adaptive Generation of Unrestricted Adversarial Inputs
Isaac Dunn
Hadrien Pouget
T. Melham
Daniel Kroening
AAML
61
7
0
07 May 2019
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
Aleksander Madry
SILM
122
1,846
0
06 May 2019
Batch Normalization is a Cause of Adversarial Vulnerability
A. Galloway
A. Golubeva
T. Tanay
M. Moussa
Graham W. Taylor
ODL AAML
84
80
0
06 May 2019
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag
A. Bhagoji
Liwei Song
Chawin Sitawarin
Daniel Cullina
M. Chiang
Prateek Mittal
OODD
79
26
0
05 May 2019
CharBot: A Simple and Effective Method for Evading DGA Classifiers
Jonathan Peck
Claire Nie
R. Sivaguru
Charles Grumer
Femi G. Olumofin
Bin Yu
A. Nascimento
Martine De Cock
AAML
48
44
0
03 May 2019
Transfer of Adversarial Robustness Between Perturbation Types
Daniel Kang
Yi Sun
Tom B. Brown
Dan Hendrycks
Jacob Steinhardt
AAML
71
49
0
03 May 2019
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang
Tianyuan Zhang
Yiping Lu
Zhanxing Zhu
Bin Dong
AAML
130
362
0
02 May 2019
Adversarial Training with Voronoi Constraints
Marc Khoury
Dylan Hadfield-Menell
AAML
63
24
0
02 May 2019
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li
Lijun Li
Liqiang Wang
Tong Zhang
Boqing Gong
AAML
86
245
0
01 May 2019
Dropping Pixels for Adversarial Robustness
Hossein Hosseini
Sreeram Kannan
Radha Poovendran
44
16
0
01 May 2019
A scalable saliency-based Feature selection method with instance level information
Brais Cancela
V. Bolón-Canedo
Amparo Alonso-Betanzos
João Gama
FAtt
62
13
0
30 Apr 2019
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction
Francesco Crecchi
D. Bacciu
Battista Biggio
AAML
83
10
0
30 Apr 2019