Towards Deep Learning Models Resistant to Adversarial Attacks
arXiv:1706.06083 — 19 June 2017 [SILM, OOD]
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks" (showing 50 of 6,528):
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization — Yifei Min, Lin Chen, Amin Karbasi (25 Feb 2020) [AAML]
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks — Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, B. Kailkhura, Xinyu Lin (25 Feb 2020) [OOD, AAML]
Wireless Fingerprinting via Deep Learning: The Impact of Confounding Factors — Metehan Cekic, S. Gopalakrishnan, Upamanyu Madhow (25 Feb 2020)
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks — Alexander Levine, Soheil Feizi (25 Feb 2020) [AAML]
Understanding and Mitigating the Tradeoff Between Robustness and Accuracy — Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang (25 Feb 2020) [AAML]
Towards Backdoor Attacks and Defense in Robust Machine Learning Models — E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay (25 Feb 2020) [AAML]
I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively — Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma (25 Feb 2020) [VLM]
Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space — Camilo Pestana, Naveed Akhtar, Wei Liu, D. Glance, Ajmal Mian (25 Feb 2020) [AAML]
HYDRA: Pruning Adversarially Robust Neural Networks — Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana (24 Feb 2020) [AAML]
Precise Tradeoffs in Adversarial Training for Linear Regression — Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani (24 Feb 2020) [AAML]
Lagrangian Decomposition for Neural Network Verification — Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip Torr, M. P. Kumar (24 Feb 2020)
Self-Adaptive Training: beyond Empirical Risk Minimization — Lang Huang, Chaoning Zhang, Hongyang R. Zhang (24 Feb 2020) [NoLa]
Learning Certified Individually Fair Representations — Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev (24 Feb 2020) [FaML]
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks — Agustinus Kristiadi, Matthias Hein, Philipp Hennig (24 Feb 2020) [BDL, UQCV]
Towards Rapid and Robust Adversarial Training with One-Step Attacks — Leo Schwinn, René Raab, Björn Eskofier (24 Feb 2020) [AAML]
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks — Matthew J. Roos (24 Feb 2020) [AAML]
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference — Ting-Kuei Hu, Tianlong Chen, Haotao Wang, Zhangyang Wang (24 Feb 2020) [OOD, AAML, 3DH]
Neuron Shapley: Discovering the Responsible Neurons — Amirata Ghorbani, James Zou (23 Feb 2020) [FAtt, TDI]
Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks — K. Sivamani, R. Sahay, Aly El Gamal (22 Feb 2020) [AAML]
Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers — Chen Zhu, Renkun Ni, Ping Yeh-Chiang, Hengduo Li, Furong Huang, Tom Goldstein (22 Feb 2020)
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition — Ziwen He, Wei Wang, Jing Dong, Tieniu Tan (22 Feb 2020) [AAML]
Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples — Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah (22 Feb 2020) [AAML]
Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems — Junchi Yang, Negar Kiyavash, Niao He (22 Feb 2020)
Polarizing Front Ends for Robust CNNs — Can Bakiskan, S. Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani (22 Feb 2020) [AAML]
Robustness to Programmable String Transformations via Augmented Abstract Training — Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni (22 Feb 2020)
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment — Scott Freitas, Shang-Tse Chen, Zijie J. Wang, Duen Horng Chau (21 Feb 2020) [AAML]
Robustness from Simple Classifiers — Sharon Qian, Dimitris Kalimeris, Gal Kaplun, Yaron Singer (21 Feb 2020) [AAML]
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework — Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu (21 Feb 2020) [AAML]
MaxUp: A Simple Way to Improve Generalization of Neural Network Training — Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu (20 Feb 2020) [AAML]
Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities — Jelena Diakonikolas (20 Feb 2020)
Automatic Shortcut Removal for Self-Supervised Representation Learning — Matthias Minderer, Olivier Bachem, N. Houlsby, Michael Tschannen (20 Feb 2020) [SSL]
Towards Certifiable Adversarial Sample Detection — Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Ross J. Anderson (20 Feb 2020) [AAML]
Boosting Adversarial Training with Hypersphere Embedding — Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su (20 Feb 2020) [AAML]
AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks — Tianlin Li, Siyue Wang, Pin-Yu Chen, Xinyu Lin, Peter Chin (19 Feb 2020) [AAML]
On Adaptive Attacks to Adversarial Example Defenses — Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry (19 Feb 2020) [AAML]
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models — Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao (19 Feb 2020) [PICV, MU]
Randomized Smoothing of All Shapes and Sizes — Greg Yang, Tony Duan, J. E. Hu, Hadi Salman, Ilya P. Razenshteyn, Jungshian Li (19 Feb 2020) [AAML]
Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks — Tsubasa Takahashi (19 Feb 2020) [GNN, AAML]
The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding — Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, ..., Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, Jianfeng Gao (19 Feb 2020) [AI4CE]
Block Switching: A Stochastic Approach for Deep Learning Security — Tianlin Li, Siyue Wang, Pin-Yu Chen, Xinyu Lin, S. Chin (18 Feb 2020) [AAML]
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent — Pu Zhao, Pin-Yu Chen, Siyue Wang, Xinyu Lin (18 Feb 2020) [AAML]
Deflecting Adversarial Attacks — Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton (18 Feb 2020) [AAML]
TensorShield: Tensor-based Defense Against Adversarial Attacks on Images — Negin Entezari, Evangelos E. Papalexakis (18 Feb 2020) [AAML]
Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness — Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Y. Ning (17 Feb 2020) [AAML]
GRAPHITE: Generating Automatic Physical Examples for Machine-Learning Attacks on Computer Vision Systems — Ryan Feng, Neal Mangaokar, Jiefeng Chen, Earlence Fernandes, S. Jha, Atul Prakash (17 Feb 2020) [OOD, AAML]
CAT: Customized Adversarial Training for Improved Robustness — Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh (17 Feb 2020) [OOD, AAML]
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality — Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora (16 Feb 2020)
Blind Adversarial Network Perturbations — Milad Nasr, Alireza Bahramali, Amir Houmansadr (16 Feb 2020) [AAML]
Hold me tight! Influence of discriminative features on deep network boundaries — Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard (15 Feb 2020) [AAML]
Manifold-based Test Generation for Image Classifiers — Taejoon Byun, Abhishek Vijayakumar, Sanjai Rayadurgam, D. Cofer (15 Feb 2020)