Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner · 16 August 2016 · arXiv:1608.04644
Topics: OOD, AAML
Papers citing "Towards Evaluating the Robustness of Neural Networks" (50 of 1,673 papers shown)

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin · AAML · 30 Apr 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 30 Apr 2020

Minority Reports Defense: Defending Against Adversarial Patches
Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, J. Liu, David Wagner · AAML · 28 Apr 2020

Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi, Natalie Frank, M. Mohri · AAML · 28 Apr 2020

Transferable Perturbations of Deep Feature Distributions
Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen · AAML · 27 Apr 2020

Towards Feature Space Adversarial Attack
Qiuling Xu, Guanhong Tao, Shuyang Cheng, Xinming Zhang · GAN, AAML · 26 Apr 2020

Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty
Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Q. Hu, Yang Liu, Jianjun Zhao, Meng Sun · AAML · 24 Apr 2020

Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks
Jianhe Yuan, Zhihai He · AAML · 23 Apr 2020

Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation
Marvin Klingner, Andreas Bär, Tim Fingscheidt · AAML · 23 Apr 2020

Scalable Attack on Graph Data by Injecting Vicious Nodes
Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Z. Yang, Q. Zheng · AAML, GNN · 22 Apr 2020

Certifying Joint Adversarial Robustness for Model Ensembles
M. Jonas, David Evans · AAML · 21 Apr 2020

Single-step Adversarial training with Dropout Scheduling
B. S. Vivek, R. Venkatesh Babu · OOD, AAML · 18 Apr 2020

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
Chenglin Yang, Adam Kortylewski, Cihang Xie, Yinzhi Cao, Alan Yuille · AAML · 12 Apr 2020

Reciprocal Learning Networks for Human Trajectory Prediction
Hao Sun, Zhiqun Zhao, Zhihai He · 09 Apr 2020

Towards Evaluating the Robustness of Chinese BERT Classifiers
Wei Ping, Boyuan Pan, Xin Li, Bo-wen Li · AAML · 07 Apr 2020

Learning to fool the speaker recognition
Jiguo Li, Xinfeng Zhang, Jizheng Xu, Li Zhang, Y. Wang, Siwei Ma, Wen Gao · AAML · 07 Apr 2020

Universal Adversarial Perturbations Generative Network for Speaker Recognition
Jiguo Li, Xinfeng Zhang, Chuanmin Jia, Jizheng Xu, Li Zhang, Y. Wang, Siwei Ma, Wen Gao · AAML · 07 Apr 2020

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Fan Yao, Adnan Siraj Rakin, Deliang Fan · AAML · 30 Mar 2020

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu · 28 Mar 2020

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang · AAML · 24 Mar 2020

Adversarial Perturbations Fool Deepfake Detectors
Apurva Gandhi, Shomik Jain · AAML · 24 Mar 2020

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy · AAML · 23 Mar 2020

Adversarial Attacks on Monocular Depth Estimation
Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo · AAML, MDE · 23 Mar 2020

DP-Net: Dynamic Programming Guided Deep Neural Network Compression
Dingcheng Yang, Wenjian Yu, Ao Zhou, Haoyuan Mu, G. Yao, Xiaoyi Wang · 21 Mar 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein · OOD · 20 Mar 2020

Face-Off: Adversarial Face Obfuscation
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, S. Jha, Suman Banerjee · PICV · 19 Mar 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck · AAML · 19 Mar 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · XAI · 17 Mar 2020

Toward Adversarial Robustness via Semi-supervised Robust Training
Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shutao Xia · AAML · 16 Mar 2020

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
Y. Tashiro, Yang Song, Stefano Ermon · AAML · 15 Mar 2020

GeoDA: a geometric framework for black-box adversarial attacks
A. Rahmati, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard, H. Dai · MLAU, AAML · 13 Mar 2020

When are Non-Parametric Methods Robust?
Robi Bhattacharjee, Kamalika Chaudhuri · AAML · 13 Mar 2020

Topological Effects on Attacks Against Vertex Classification
B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad · AAML · 12 Mar 2020

Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
Yankun Ren, J. Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren · GAN, AAML, SILM · 10 Mar 2020

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev · AI4TS, AAML · 08 Mar 2020

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang · AAML · 08 Mar 2020

On the Robustness of Cooperative Multi-Agent Reinforcement Learning
Jieyu Lin, Kristina Dzeparoska, Shanghang Zhang, A. Leon-Garcia, Nicolas Papernot · AAML · 08 Mar 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020

The Variational InfoMax Learning Objective
Vincenzo Crescimanna, Bruce P. Graham · 07 Mar 2020

Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard · AAML · 06 Mar 2020

Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana · AAML · 06 Mar 2020

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Saehyung Lee, Hyungyu Lee, Sungroh Yoon · AAML · 05 Mar 2020

Validation of Image-Based Neural Network Controllers through Adaptive Stress Testing
Kyle D. Julian, Ritchie Lee, Mykel J. Kochenderfer · 05 Mar 2020

Deep Neural Network Perception Models and Robust Autonomous Driving Systems
M. Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, A. Wong · OOD · 04 Mar 2020

Analyzing Accuracy Loss in Randomized Smoothing Defenses
Yue Gao, Harrison Rosenberg, Kassem Fawaz, S. Jha, Justin Hsu · AAML · 03 Mar 2020

Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
Nataniel Ruiz, Sarah Adel Bargal, Stan Sclaroff · PICV, AAML · 03 Mar 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 02 Mar 2020

Utilizing Network Properties to Detect Erroneous Inputs
Matt Gorbett, Nathaniel Blanchard · AAML · 28 Feb 2020

Testing Monotonicity of Machine Learning Models
Arnab Sharma, Heike Wehrheim · 27 Feb 2020

On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks
Yue Zhao, Yuwei Wu, Caihua Chen, A. Lim · 3DPC · 27 Feb 2020