Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner · AAML · arXiv:1802.00420 · 1 February 2018
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"
50 / 1,929 papers shown
Deep Neural Rejection against Adversarial Examples
Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli · AAML · 01 Oct 2019

Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash · ObjD, AAML · 30 Sep 2019

Deep k-NN Defense against Clean-label Data Poisoning Attacks
Neehar Peri, Neal Gupta, Wenjie Huang, Liam H. Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson · AAML · 29 Sep 2019

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt · TTA, OOD · 29 Sep 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre · AAML, MQ · 27 Sep 2019

Lower Bounds on Adversarial Robustness from Optimal Transport
A. Bhagoji, Daniel Cullina, Prateek Mittal · OOD, OT, AAML · 26 Sep 2019

FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu · AAML · 25 Sep 2019

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang, Kun Xu, Jun Zhu · AAML · 25 Sep 2019

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh · AAML · 24 Sep 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong · 23 Sep 2019

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
Alesia Chernikova, Alina Oprea · AAML · 23 Sep 2019

Robust Local Features for Improving the Generalization of Adversarial Training
Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, John E. Hopcroft · OOD, AAML · 23 Sep 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik · AAML · 20 Sep 2019

Training Robust Deep Neural Networks via Adversarial Noise Propagation
Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, Dacheng Tao · AAML · 19 Sep 2019

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, S. Adachi · AAML · 19 Sep 2019

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain · AAML · 17 Sep 2019

HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples
Wanting Yu, Hongyi Yu, Lingyun Jiang, Mengli Zhang, Kai Qiao · GAN, AAML · 17 Sep 2019

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity
Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li · AAML · 16 Sep 2019

Natural Language Adversarial Defense through Synonym Encoding
Xiaosen Wang, Hao Jin, Yichen Yang, Kun He · AAML · 15 Sep 2019

White-Box Adversarial Defense via Self-Supervised Data Estimation
Zudi Lin, Hanspeter Pfister, Ziming Zhang · AAML · 13 Sep 2019

Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders
Pratik Vaishnavi, Kevin Eykholt, A. Prakash, Amir Rahmati · AAML · 12 Sep 2019

Feedback Learning for Improving the Robustness of Neural Networks
Chang Song, Zuoguan Wang, H. Li · AAML · 12 Sep 2019

Sparse and Imperceivable Adversarial Attacks
Francesco Croce, Matthias Hein · AAML · 11 Sep 2019

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks
Hang Yu, Aishan Liu, Xianglong Liu, Gen Li, Ping Luo, R. Cheng, Jichen Yang, Chongzhi Zhang · AAML · 11 Sep 2019

Identifying and Resisting Adversarial Videos Using Temporal Consistency
Xiaojun Jia, Xingxing Wei, Xiaochun Cao · AAML · 11 Sep 2019

Neural Belief Reasoner
Haifeng Qian · NAI, BDL · 10 Sep 2019

FDA: Feature Disruptive Attack
Aditya Ganeshan, S. VivekB., R. Venkatesh Babu · AAML · 10 Sep 2019

Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection
Byunggill Joe, Sung Ju Hwang, I. Shin · AAML · 10 Sep 2019

Adversarial Robustness Against the Union of Multiple Perturbation Models
Pratyush Maini, Eric Wong, J. Zico Kolter · OOD, AAML · 09 Sep 2019

On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
Uyeong Jang, Susmit Jha, S. Jha · AAML · 07 Sep 2019

Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, Pushmeet Kohli · AAML · 03 Sep 2019

Certified Robustness to Adversarial Word Substitutions
Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang · AAML · 03 Sep 2019

Metric Learning for Adversarial Robustness
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray · OOD · 03 Sep 2019

The many faces of deep learning
Raul Vicente · FedML, AI4CE · 25 Aug 2019

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
Dou Goodman, Xingjian Li, Ji Liu, Jun Huan, Tao Wei · AAML · 23 Aug 2019

Testing Robustness Against Unforeseen Adversaries
Maximilian Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, ..., Adam Dziedzic, Franziska Boenisch, Tom B. Brown, Jacob Steinhardt, Dan Hendrycks · AAML · 21 Aug 2019

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Tianlin Li, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, S. Chin · AAML · 20 Aug 2019

Adversarial Defense by Suppressing High-frequency Components
Zhendong Zhang, Cheolkon Jung, X. Liang · 19 Aug 2019

Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation
Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel · AAML · 18 Aug 2019

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai · AI4CE · 17 Aug 2019

Adversarial Neural Pruning with Latent Vulnerability Suppression
Divyam Madaan, Jinwoo Shin, Sung Ju Hwang · AAML · 12 Aug 2019

BlurNet: Defense by Filtering the Feature Maps
Ravi Raju, Mikko H. Lipasti · AAML · 06 Aug 2019

MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks
Chen Ma, Chenxu Zhao, Hailin Shi, Li Chen, Junhai Yong, Dan Zeng · AAML · 06 Aug 2019

A principled approach for generating adversarial images under non-smooth dissimilarity metrics
Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, Adam M. Oberman · AAML · 05 Aug 2019

Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding
Utku Ozbulak, Arnout Van Messem, W. D. Neve · AAML · 30 Jul 2019

Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples
Hossein Hosseini, Sreeram Kannan, Radha Poovendran · AAML · 28 Jul 2019

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin
Kaiwen Wu, Yaoliang Yu · AAML · 26 Jul 2019

Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang, Jianyu Wang · AAML · 24 Jul 2019

Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks
Haichao Zhang, Jianyu Wang · 24 Jul 2019

Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, Feng Lu · MedIm, AAML · 24 Jul 2019