Cited By: Towards Deep Learning Models Resistant to Adversarial Attacks
arXiv: 1706.06083 · 19 June 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD
Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks" (50 of 6,533 shown)
Certified Defenses for Adversarial Patches
Ping Yeh-Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
AAML · 21 · 171 · 0 · 14 Mar 2020

On the benefits of defining vicinal distributions in latent space
Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, V. Balasubramanian
AAML · 19 · 3 · 0 · 14 Mar 2020

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Chawin Sitawarin, David Wagner
AAML · 15 · 20 · 0 · 14 Mar 2020

Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation
Xiaogang Xu, Hengshuang Zhao, Jiaya Jia
AAML · 20 · 38 · 0 · 14 Mar 2020

When are Non-Parametric Methods Robust?
Robi Bhattacharjee, Kamalika Chaudhuri
AAML · 49 · 27 · 0 · 13 Mar 2020

ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection
Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi, Amirreza Shaeiri, M. Rohban, Hamid R. Rabiee
AAML · 36 · 62 · 0 · 12 Mar 2020

Using an ensemble color space model to tackle adversarial examples
Shreyank N. Gowda, C. Yuan
AAML · 19 · 1 · 0 · 10 Mar 2020

A Survey of Adversarial Learning on Graphs
Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, Zibin Zheng, Bingzhe Wu
AAML · 24 · 84 · 0 · 10 Mar 2020

Manifold Regularization for Locally Stable Deep Neural Networks
Charles Jin, Martin Rinard
AAML · 28 · 15 · 0 · 09 Mar 2020

An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods
Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Y. Yoo
AAML, OOD · 350 · 53 · 0 · 09 Mar 2020

No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
Siqi Liu, A. Setio, Florin-Cristian Ghesu, Eli Gibson, Sasa Grbic, Bogdan Georgescu, Dorin Comaniciu
AAML, OOD · 44 · 40 · 0 · 08 Mar 2020

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang
AAML · 169 · 224 · 0 · 08 Mar 2020

Adversarial Machine Learning: Bayesian Perspectives
D. Insua, Roi Naveiro, Víctor Gallego, Jason Poulos
AAML · 16 · 18 · 0 · 07 Mar 2020

Defense against adversarial attacks on spoofing countermeasures of ASV
Haibin Wu, Songxiang Liu, Helen Meng, Hung-yi Lee
AAML · 98 · 53 · 0 · 06 Mar 2020

Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard
AAML · 39 · 34 · 0 · 06 Mar 2020

Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana
AAML · 41 · 14 · 0 · 06 Mar 2020

Search Space of Adversarial Perturbations against Image Filters
D. D. Thang, Toshihiro Matsui
AAML · 14 · 1 · 0 · 05 Mar 2020

Confusing and Detecting ML Adversarial Attacks with Injected Attractors
Jiyi Zhang, E. Chang, H. Lee
AAML · 32 · 1 · 0 · 05 Mar 2020

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Saehyung Lee, Hyungyu Lee, Sungroh Yoon
AAML · 169 · 113 · 0 · 05 Mar 2020

A Closer Look at Accuracy vs. Robustness
Yao-Yuan Yang, Cyrus Rashtchian, Hongyang R. Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri
OOD · 82 · 26 · 0 · 05 Mar 2020

SAM: The Sensitivity of Attribution Methods to Hyperparameters
Naman Bansal, Chirag Agarwal, Anh Nguyen
FAtt · 21 · 0 · 0 · 04 Mar 2020

Colored Noise Injection for Training Adversarially Robust Neural Networks
Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, A. Mendelson, A. Bronstein
AAML · 22 · 5 · 0 · 04 Mar 2020

Deep Neural Network Perception Models and Robust Autonomous Driving Systems
M. Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, A. Wong
OOD · 41 · 15 · 0 · 04 Mar 2020

Metrics and methods for robustness evaluation of neural networks with generative models
Igor Buzhinsky, Arseny Nerinovsky, S. Tripakis
AAML · 42 · 25 · 0 · 04 Mar 2020

Denoised Smoothing: A Provable Defense for Pretrained Classifiers
Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
45 · 23 · 0 · 04 Mar 2020

Double Backpropagation for Training Autoencoders against Adversarial Attack
Chengjin Sun, Sizhe Chen, Xiaolin Huang
SILM, AAML · 45 · 5 · 0 · 04 Mar 2020

Type I Attack for Generative Models
Chengjin Sun, Sizhe Chen, Jia Cai, Xiaolin Huang
AAML · 30 · 10 · 0 · 04 Mar 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
AAML · 96 · 1,814 · 0 · 03 Mar 2020

Analyzing Accuracy Loss in Randomized Smoothing Defenses
Yue Gao, Harrison Rosenberg, Kassem Fawaz, S. Jha, Justin Hsu
AAML · 24 · 6 · 0 · 03 Mar 2020

Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
Nataniel Ruiz, Sarah Adel Bargal, Stan Sclaroff
PICV, AAML · 25 · 119 · 0 · 03 Mar 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
OOD, AAML · 77 · 63 · 0 · 02 Mar 2020

Structured Prediction with Partial Labelling through the Infimum Loss
Vivien A. Cabannes, Alessandro Rudi, Francis R. Bach
6 · 40 · 0 · 02 Mar 2020

Out-of-Distribution Generalization via Risk Extrapolation (REx)
David M. Krueger, Ethan Caballero, J. Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, Aaron Courville
OOD · 222 · 916 · 0 · 02 Mar 2020

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu C. Aggarwal, Jiliang Tang
AAML, GNN · 34 · 103 · 0 · 02 Mar 2020

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets
Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher
AAML · 19 · 17 · 0 · 02 Mar 2020

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models
Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
31 · 17 · 0 · 01 Mar 2020

Improving Certified Robustness via Statistical Learning with Logical Reasoning
Zhuolin Yang, Zhikuan Zhao, Wei Ping, Jiawei Zhang, Linyi Li, ..., Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Yue Liu
AAML · 38 · 13 · 0 · 28 Feb 2020

DROCC: Deep Robust One-Class Classification
Sachin Goyal, Aditi Raghunathan, Moksh Jain, H. Simhadri, Prateek Jain
VLM · 33 · 161 · 0 · 28 Feb 2020

Are L2 adversarial examples intrinsically different?
Mingxuan Li, Jingyuan Wang, Yufan Wu
AAML · 14 · 0 · 0 · 28 Feb 2020

Utilizing Network Properties to Detect Erroneous Inputs
Matt Gorbett, Nathaniel Blanchard
AAML · 28 · 6 · 0 · 28 Feb 2020

Detecting Patch Adversarial Attacks with Image Residuals
Marius Arvinte, Ahmed H. Tewfik, S. Vishwanath
AAML · 17 · 5 · 0 · 28 Feb 2020

TSS: Transformation-Specific Smoothing for Robustness Certification
Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, B. Kailkhura, Tao Xie, Ce Zhang, Yue Liu
AAML · 43 · 56 · 0 · 27 Feb 2020

On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks
Yue Zhao, Yuwei Wu, Caihua Chen, A. Lim
3DPC · 21 · 70 · 0 · 27 Feb 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
WaLM, AAML · 20 · 218 · 0 · 27 Feb 2020

Defense-PointNet: Protecting PointNet Against Adversarial Attacks
Yu Zhang, G. Liang, Tawfiq Salem, Nathan Jacobs
AAML, 3DPC · 20 · 27 · 0 · 27 Feb 2020

Improving Robustness of Deep-Learning-Based Image Reconstruction
Ankit Raj, Y. Bresler, Yue Liu
OOD, AAML · 34 · 50 · 0 · 26 Feb 2020

Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Sicheng Zhu, Xiao Zhang, David Evans
SSL, OOD · 16 · 27 · 0 · 26 Feb 2020

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy
Aditya Saligrama, Guillaume Leclerc
AAML · 16 · 1 · 0 · 26 Feb 2020

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter
47 · 792 · 0 · 26 Feb 2020

Randomization matters. How to defend against strong adversarial attacks
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif
AAML · 20 · 58 · 0 · 26 Feb 2020