Towards Deep Learning Models Resistant to Adversarial Attacks
arXiv:1706.06083, 19 June 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
Tags: SILM, OOD
Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks" (50 of 6,520 shown)
Title | Authors | Tags | Date
The Conditional Entropy Bottleneck | Ian S. Fischer | OOD | 13 Feb 2020
Predictive Power of Nearest Neighbors Algorithm under Random Perturbation | Yue Xing, Qifan Song, Guang Cheng | - | 13 Feb 2020
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization | Xiangning Chen, Cho-Jui Hsieh | - | 12 Feb 2020
Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence | S. Raschka, Joshua Patterson, Corey J. Nolet | AI4CE | 12 Feb 2020
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models | Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi | OOD | 11 Feb 2020
Adversarial Robustness for Code | Pavol Bielik, Martin Vechev | AAML | 11 Feb 2020
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations | Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, J. Jacobsen | AAML, SILM | 11 Feb 2020
Robustness of Bayesian Neural Networks to Gradient-Based Attacks | Ginevra Carbone, Matthew Wicker, Luca Laurenti, A. Patané, Luca Bortolussi, G. Sanguinetti | AAML | 11 Feb 2020
Improving the affordability of robustness training for DNNs | Sidharth Gupta, Parijat Dube, Ashish Verma | AAML | 11 Feb 2020
Generalised Lipschitz Regularisation Equals Distributional Robustness | Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith | OOD | 11 Feb 2020
Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers | P. Dasgupta, J. B. Collins, Michael McCarrick | AAML | 10 Feb 2020
Adversarial Data Encryption | Yingdong Hu, Liang Zhang, W. Shan, Xiaoxiao Qin, Jinghuai Qi, Zhenzhou Wu, Yang Yuan | FedML, MedIm | 10 Feb 2020
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection | Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu | AAML | 10 Feb 2020
Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images | Avrim Blum, Travis Dick, N. Manoj, Hongyang R. Zhang | AAML | 10 Feb 2020
Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing | Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong | AAML | 09 Feb 2020
Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness | Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi | - | 08 Feb 2020
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks | Lu Chen, Wenyuan Xu | AAML | 08 Feb 2020
Analysis of Random Perturbations for Robust Convolutional Neural Networks | Adam Dziedzic, S. Krishnan | OOD, AAML | 08 Feb 2020
Semantic Robustness of Models of Source Code | Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, S. Jha, Thomas W. Reps | SILM, AAML | 07 Feb 2020
Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness | Ting-Wu Chin, Cha Zhang, Diana Marculescu | AAML | 07 Feb 2020
Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification | Meet P. Vadera, Satya Narayan Shukla, B. Jalaeian, Benjamin M. Marlin | AAML, BDL | 07 Feb 2020
RAID: Randomized Adversarial-Input Detection for Neural Networks | Hasan Ferit Eniser, M. Christakis, Valentin Wüstholz | AAML | 07 Feb 2020
AI-GAN: Attack-Inspired Generation of Adversarial Examples | Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Yue Liu, Alex C. Kot | GAN | 06 Feb 2020
Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study | David Mickisch, F. Assion, Florens Greßner, W. Günther, M. Motta | AAML | 05 Feb 2020
Minimax Defense against Gradient-based Adversarial Attacks | Blerta Lindqvist, R. Izmailov | AAML | 04 Feb 2020
Regularizers for Single-step Adversarial Training | B. S. Vivek, R. Venkatesh Babu | AAML | 03 Feb 2020
Towards Sharper First-Order Adversary with Quantized Gradients | Zhuanghua Liu, Ivor W. Tsang | AAML | 01 Feb 2020
Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems | Noah Golowich, S. Pattathil, C. Daskalakis, Asuman Ozdaglar | - | 31 Jan 2020
On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views | Milad Sefidgaran, Iñaki Estella Aguerri, S. Shamai | - | 31 Jan 2020
Local intrinsic dimensionality estimators based on concentration of measure | Jonathan Bac, A. Zinovyev | - | 31 Jan 2020
Tiny noise, big mistakes: Adversarial perturbations induce errors in Brain-Computer Interface spellers | Xiao Zhang, Dongrui Wu, L. Ding, Hanbin Luo, Chin-Teng Lin, T. Jung, Ricardo Chavarriaga | AAML | 30 Jan 2020
Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain | Yigit Can Alparslan, Ken Alparslan, Jeremy Keim-Shenk, S. Khade, Rachel Greenstadt | AAML | 30 Jan 2020
REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild | Rahul Duggal, Scott Freitas, Cao Xiao, Duen Horng Chau, Jimeng Sun | - | 29 Jan 2020
Explaining with Counter Visual Attributes and Examples | Sadaf Gulshad, A. Smeulders | XAI, FAtt, AAML | 27 Jan 2020
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles | Yilan Li, Senem Velipasalar | AAML | 25 Jan 2020
Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks | Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht | - | 22 Jan 2020
Zeroth-Order Algorithms for Nonconvex Minimax Problems with Improved Complexities | Zhongruo Wang, Krishnakumar Balasubramanian, Shiqian Ma, Meisam Razaviyayn | - | 22 Jan 2020
GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems | Yanmao Man, Ming Li, Ryan M. Gerdes | AAML | 21 Jan 2020
HRFA: High-Resolution Feature-based Attack | Jia Cai, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang | AAML | 21 Jan 2020
Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks | F. Behnia, Ali Mirzaeian, Mohammad Sabokrou, S. Manoj, T. Mohsenin, Khaled N. Khasawneh, Liang Zhao, Houman Homayoun, Avesta Sasan | AAML | 16 Jan 2020
A simple way to make neural networks robust against diverse image corruptions | E. Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel | - | 16 Jan 2020
A Little Fog for a Large Turn | Harshitha Machiraju, V. Balasubramanian | AAML | 16 Jan 2020
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet | Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang | AAML | 16 Jan 2020
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | R. Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov | AAML | 14 Jan 2020
Smooth markets: A basic mechanism for organizing gradient-based learners | David Balduzzi, Wojciech M. Czarnecki, Thomas W. Anthony, I. Gemp, Edward Hughes, Joel Z Leibo, Georgios Piliouras, T. Graepel | - | 14 Jan 2020
Advbox: a toolbox to generate adversarial examples that fool neural networks | Dou Goodman, Xin Hao, Yang Wang, Yuesheng Wu, Junfeng Xiong, Huan Zhang | AAML | 13 Jan 2020
On the Resilience of Biometric Authentication Systems against Random Inputs | Benjamin Zi Hao Zhao, Hassan Jameel Asghar, M. Kâafar | AAML | 13 Jan 2020
Fast is better than free: Revisiting adversarial training | Eric Wong, Leslie Rice, J. Zico Kolter | AAML, OOD | 12 Jan 2020
Sparse Black-box Video Attack with Reinforcement Learning | Xingxing Wei, Huanqian Yan, Yue Liu | AAML | 11 Jan 2020
Guess First to Enable Better Compression and Adversarial Robustness | Sicheng Zhu, Bang An, Shiyu Niu | AAML | 10 Jan 2020