Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner. [AAML]
arXiv:1802.00420, 1 February 2018
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 748 papers shown)
- Scalable Attack on Graph Data by Injecting Vicious Nodes. Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Z. Yang, Q. Zheng. [AAML, GNN] (22 Apr 2020)
- Single-step Adversarial training with Dropout Scheduling. Vivek B. S., R. Venkatesh Babu. [OOD, AAML] (18 Apr 2020)
- Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning. Michael Everett, Bjorn Lutjens, Jonathan P. How. [AAML] (11 Apr 2020)
- Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning. Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang. [AAML] (28 Mar 2020)
- DaST: Data-free Substitute Training for Adversarial Attacks. Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu. (28 Mar 2020)
- Systematic Evaluation of Privacy Risks of Machine Learning Models. Liwei Song, Prateek Mittal. [MIACV] (24 Mar 2020)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations. Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy. [AAML] (23 Mar 2020)
- Adversarial Attacks on Monocular Depth Estimation. Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo. [AAML, MDE] (23 Mar 2020)
- Adversarial Robustness on In- and Out-Distribution Improves Explainability. Maximilian Augustin, Alexander Meinke, Matthias Hein. [OOD] (20 Mar 2020)
- Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates. Amin Ghiasi, Ali Shafahi, Tom Goldstein. (19 Mar 2020)
- Anomalous Example Detection in Deep Learning: A Survey. Saikiran Bulusu, B. Kailkhura, Bo-wen Li, P. Varshney, D. Song. [AAML] (16 Mar 2020)
- Towards Face Encryption by Generating Adversarial Identity Masks. Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, YueFeng Chen, H. Xue. [AAML, PICV] (15 Mar 2020)
- Topological Effects on Attacks Against Vertex Classification. B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad. [AAML] (12 Mar 2020)
- On the Robustness of Cooperative Multi-Agent Reinforcement Learning. Jieyu Lin, Kristina Dzeparoska, Shanghang Zhang, A. Leon-Garcia, Nicolas Papernot. [AAML] (08 Mar 2020)
- Exploiting Verified Neural Networks via Floating Point Numerical Error. Kai Jia, Martin Rinard. [AAML] (06 Mar 2020)
- Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization. Saehyung Lee, Hyungyu Lee, Sungroh Yoon. [AAML] (05 Mar 2020)
- Deep Neural Network Perception Models and Robust Autonomous Driving Systems. M. Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, A. Wong. [OOD] (04 Mar 2020)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness. Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong. [OOD, AAML] (02 Mar 2020)
- On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks. Yue Zhao, Yuwei Wu, Caihua Chen, A. Lim. [3DPC] (27 Feb 2020)
- Improving Robustness of Deep-Learning-Based Image Reconstruction. Ankit Raj, Y. Bresler, Bo-wen Li. [OOD, AAML] (26 Feb 2020)
- Overfitting in adversarially robust deep learning. Leslie Rice, Eric Wong, Zico Kolter. (26 Feb 2020)
- On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot. [AAML] (26 Feb 2020)
- Adversarial Ranking Attack and Defense. Mo Zhou, Zhenxing Niu, Le Wang, Qilin Zhang, G. Hua. (26 Feb 2020)
- Attacks Which Do Not Kill Training Make Adversarial Learning Stronger. Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli. [AAML] (26 Feb 2020)
- Lagrangian Decomposition for Neural Network Verification. Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip Torr, M. P. Kumar. (24 Feb 2020)
- Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework. Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu. [AAML] (21 Feb 2020)
- On Adaptive Attacks to Adversarial Example Defenses. Florian Tramèr, Nicholas Carlini, Wieland Brendel, A. Madry. [AAML] (19 Feb 2020)
- Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks. Tsubasa Takahashi. [GNN, AAML] (19 Feb 2020)
- Deflecting Adversarial Attacks. Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton. [AAML] (18 Feb 2020)
- CAT: Customized Adversarial Training for Improved Robustness. Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh. [OOD, AAML] (17 Feb 2020)
- Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality. Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora. (16 Feb 2020)
- CEB Improves Model Robustness. Ian S. Fischer, Alexander A. Alemi. [AAML] (13 Feb 2020)
- The Conditional Entropy Bottleneck. Ian S. Fischer. [OOD] (13 Feb 2020)
- More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models. Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi. [OOD] (11 Feb 2020)
- Robustness of Bayesian Neural Networks to Gradient-Based Attacks. Ginevra Carbone, Matthew Wicker, Luca Laurenti, A. Patané, Luca Bortolussi, G. Sanguinetti. [AAML] (11 Feb 2020)
- Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images. Avrim Blum, Travis Dick, N. Manoj, Hongyang R. Zhang. [AAML] (10 Feb 2020)
- Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong. [AAML] (09 Feb 2020)
- Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness. Aounon Kumar, Alexander Levine, Tom Goldstein, S. Feizi. (08 Feb 2020)
- Analysis of Random Perturbations for Robust Convolutional Neural Networks. Adam Dziedzic, S. Krishnan. [OOD, AAML] (08 Feb 2020)
- Semantic Robustness of Models of Source Code. Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, S. Jha, Thomas W. Reps. [SILM, AAML] (07 Feb 2020)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias. Richa Singh, Akshay Agarwal, Maneet Singh, Shruti Nagpal, Mayank Vatsa. [CVBM, AAML] (07 Feb 2020)
- Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study. David Mickisch, F. Assion, Florens Greßner, W. Günther, M. Motta. [AAML] (05 Feb 2020)
- Minimax Defense against Gradient-based Adversarial Attacks. Blerta Lindqvist, R. Izmailov. [AAML] (04 Feb 2020)
- Towards Sharper First-Order Adversary with Quantized Gradients. Zhuanghua Liu, Ivor W. Tsang. [AAML] (01 Feb 2020)
- Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks. Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht. (22 Jan 2020)
- A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, Jieping Ye. [EGVM] (20 Jan 2020)
- Fast is better than free: Revisiting adversarial training. Eric Wong, Leslie Rice, J. Zico Kolter. [AAML, OOD] (12 Jan 2020)
- MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius. Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang. [OOD, AAML] (08 Jan 2020)
- Empirical Studies on the Properties of Linear Regions in Deep Neural Networks. Xiao Zhang, Dongrui Wu. (04 Jan 2020)
- Adversarial Example Generation using Evolutionary Multi-objective Optimization. Takahiro Suzuki, Shingo Takeshita, S. Ono. [AAML] (30 Dec 2019)