arXiv:1608.04644
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
16 August 2016
Tags: OOD, AAML
Papers citing "Towards Evaluating the Robustness of Neural Networks" (showing 50 of 4,017):
1. Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior. Thomas Gittings, Steve A. Schneider, John Collomosse. [AAML] 61/10/0. 03 Jul 2019.
2. Treant: Training Evasion-Aware Decision Trees. Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, S. Abebe, S. Orlando. [AAML] 75/41/0. 02 Jul 2019.
3. Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation. Nader Asadi, Amirm. Sarfi, Mehrdad Hosseinzadeh, Sahba Tahsini, M. Eftekhari. [AAML] 32/2/0. 01 Jul 2019.
4. Accurate, reliable and fast robustness evaluation. Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge. [AAML, OOD] 97/113/0. 01 Jul 2019.
5. Robust and Resource Efficient Identification of Two Hidden Layer Neural Networks. M. Fornasier, T. Klock, Michael Rauchensteiner. 83/18/0. 30 Jun 2019.
6. Fooling a Real Car with Adversarial Traffic Signs. N. Morgulis, Alexander Kreines, Shachar Mendelowitz, Yuval Weisglass. [AAML] 89/93/0. 30 Jun 2019.
7. Cellular State Transformations using Generative Adversarial Networks. Colin Targonski, Benjamin T. Shealy, M. C. Smith, F. Feltus. 21/1/0. 28 Jun 2019.
8. Robustness Guarantees for Deep Neural Networks on Videos. Min Wu, Marta Z. Kwiatkowska. [AAML] 83/23/0. 28 Jun 2019.
9. Training individually fair ML models with Sensitive Subspace Robustness. Mikhail Yurochkin, Amanda Bower, Yuekai Sun. [FaML, OOD] 95/120/0. 28 Jun 2019.
10. Using Intuition from Empirical Properties to Simplify Adversarial Training Defense. Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah. [AAML] 37/2/0. 27 Jun 2019.
11. Evolving Robust Neural Architectures to Defend from Adversarial Attacks. Shashank Kotyan, Danilo Vasconcellos Vargas. [OOD, AAML] 81/36/0. 27 Jun 2019.
12. Adversarial Robustness via Label-Smoothing. Morgane Goibert, Elvis Dohmatob. [AAML] 124/18/0. 27 Jun 2019.
13. Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness. Fanny Yang, Zuowen Wang, C. Heinze-Deml. 138/42/0. 26 Jun 2019.
14. Defending Adversarial Attacks by Correcting logits. Yifeng Li, Lingxi Xie, Ya Zhang, Rui Zhang, Yanfeng Wang, Qi Tian. [AAML] 44/5/0. 26 Jun 2019.
15. Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs. Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann. [AAML] 94/237/0. 26 Jun 2019.
16. Quantitative Verification of Neural Networks And its Security Applications. Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena. [AAML] 89/105/0. 25 Jun 2019.
17. Machine Learning Testing: Survey, Landscapes and Horizons. Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu. [VLM, AILaw] 110/757/0. 19 Jun 2019.
18. Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield. Dou Goodman, Tao Wei. [AAML] 69/6/0. 19 Jun 2019.
19. SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing. Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Yue Liu. [AAML] 71/172/0. 19 Jun 2019.
20. Global Adversarial Attacks for Assessing Deep Learning Robustness. Hanbin Hu, Mitt Shah, Jianhua Z. Huang, Peng Li. [AAML] 80/4/0. 19 Jun 2019.
21. Convergence of Adversarial Training in Overparametrized Neural Networks. Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee. [AAML] 117/109/0. 19 Jun 2019.
22. The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks. F. Assion, Peter Schlicht, Florens Greßner, W. Günther, Fabian Hüger, Nico M. Schmidt, Umair Rasheed. [AAML] 75/14/0. 17 Jun 2019.
23. Improving Black-box Adversarial Attacks with a Transfer-based Prior. Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu. [AAML] 96/274/0. 17 Jun 2019.
24. Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy. Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Arno Solin, Yoshua Bengio. [AAML] 74/100/0. 16 Jun 2019.
25. Defending Against Adversarial Attacks Using Random Forests. Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong. [AAML] 67/14/0. 16 Jun 2019.
26. Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences. Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki. 52/0/0. 15 Jun 2019.
27. Towards Stable and Efficient Training of Verifiably Robust Neural Networks. Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Yue Liu, Duane S. Boning, Cho-Jui Hsieh. [AAML] 113/351/0. 14 Jun 2019.
28. Towards Compact and Robust Deep Neural Networks. Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana. [AAML] 82/40/0. 14 Jun 2019.
29. Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking. Ziqi Yang, Hung Dang, E. Chang. [AAML] 116/34/0. 14 Jun 2019.
30. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. Kaifeng Lyu, Jian Li. 128/336/0. 13 Jun 2019.
31. A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks. R. Sahay, Rehana Mahfuz, Aly El Gamal. [AAML] 33/5/0. 13 Jun 2019.
32. Evolutionary Trigger Set Generation for DNN Black-Box Watermarking. Jiabao Guo, M. Potkonjak. [AAML, WIGM] 63/15/0. 11 Jun 2019.
33. Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks. Ziang Yan, Yiwen Guo, Changshui Zhang. [AAML] 81/111/0. 11 Jun 2019.
34. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective. Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin. [AAML] 131/454/0. 10 Jun 2019.
35. E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles. M. Kettunen, Erik Härkönen, J. Lehtinen. [AAML] 65/63/0. 10 Jun 2019.
36. Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective. Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi Zhou, Cho-Jui Hsieh. [AAML] 83/22/0. 10 Jun 2019.
37. Robustness Verification of Tree-based Models. Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh. [AAML] 103/77/0. 10 Jun 2019.
38. Attacking Graph Convolutional Networks via Rewiring. Yao Ma, Suhang Wang, Tyler Derr, Lingfei Wu, Jiliang Tang. [AAML, GNN] 64/84/0. 10 Jun 2019.
39. On the Vulnerability of Capsule Networks to Adversarial Attacks. Félix D. P. Michels, Tobias Uelwer, Eric Upschulte, Stefan Harmeling. [AAML] 77/24/0. 09 Jun 2019.
40. Adversarial Attack Generation Empowered by Min-Max Optimization. Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song. [AAML] 83/39/0. 09 Jun 2019.
41. Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks. Maksym Andriushchenko, Matthias Hein. 84/62/0. 08 Jun 2019.
42. ML-LOO: Detecting Adversarial Examples with Feature Attribution. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan. [AAML] 98/101/0. 08 Jun 2019.
43. Defending Against Universal Attacks Through Selective Feature Regeneration. Tejas S. Borkar, Felix Heide, Lina Karam. [AAML] 52/1/0. 08 Jun 2019.
44. Making targeted black-box evasion attacks effective and efficient. Mika Juuti, B. Atli, Nadarajah Asokan. [AAML, MIACV, MLAU] 49/8/0. 08 Jun 2019.
45. Efficient Project Gradient Descent for Ensemble Adversarial Attack. Fanyou Wu, R. Gazo, E. Haviarova, Bedrich Benes. [AAML] 33/5/0. 07 Jun 2019.
46. Robustness for Non-Parametric Classification: A Generic Attack and Defense. Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri. [SILM, AAML] 94/43/0. 07 Jun 2019.
47. Inductive Bias of Gradient Descent based Adversarial Training on Separable Data. Yan Li, Ethan X. Fang, Huan Xu, T. Zhao. 92/16/0. 07 Jun 2019.
48. Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness. Walt Woods, Jack H Chen, C. Teuscher. [AAML] 73/46/0. 07 Jun 2019.
49. Query-efficient Meta Attack to Deep Neural Networks. Jiawei Du, Hu Zhang, Qiufeng Wang, Yi Yang, Jiashi Feng. [AAML] 66/84/0. 06 Jun 2019.
50. Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise. Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh. 114/138/0. 05 Jun 2019.