arXiv: 1802.00420
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner (AAML)
1 February 2018
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 papers shown)
Understanding the Error in Evaluating Adversarial Robustness
Pengfei Xia, Ziqiang Li, Hongjing Niu, Bin Li (AAML, ELM), 07 Jan 2021

Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
Alessandro Cappelli, Ruben Ohana, Julien Launay, Laurent Meunier, Iacopo Poli, Florent Krzakala (AAML), 06 Jan 2021

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi (OOD), 04 Jan 2021

Improving Adversarial Robustness in Weight-quantized Neural Networks
Chang Song, Elias Fallon, Hai Helen Li (AAML), 29 Dec 2020

A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning
Ahmadreza Jeddi, M. Shafiee, A. Wong (AAML), 25 Dec 2020

Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training
Theodoros Tsiligkaridis, Jay Roberts (AAML), 22 Dec 2020

Discovering Robust Convolutional Architecture at Targeted Capacity: A Multi-Shot Approach
Xuefei Ning, Jiaqi Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang (AAML), 22 Dec 2020
Self-Progressing Robust Training
Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das (AAML, VLM), 22 Dec 2020

On Success and Simplicity: A Second Look at Transferable Targeted Attacks
Zhengyu Zhao, Zhuoran Liu, Martha Larson (AAML), 21 Dec 2020

RAILS: A Robust Adversarial Immune-inspired Learning System
Ren Wang, Tianqi Chen, Stephen Lindsly, A. Rehemtulla, Alfred Hero, I. Rajapakse (AAML), 18 Dec 2020

On the human-recognizability phenomenon of adversarially trained deep image classifiers
Jonathan W. Helland, Nathan M. VanHoudnos (AAML), 18 Dec 2020

A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks
Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou (AAML, MedIm), 17 Dec 2020

Characterizing the Evasion Attackability of Multi-label Classifiers
Zhuo Yang, Yufei Han, Xiangliang Zhang (AAML), 17 Dec 2020

Incentivizing Truthfulness Through Audits in Strategic Classification
Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik (MLAU), 16 Dec 2020
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
Linjie Li, Zhe Gan, Jingjing Liu (VLM), 15 Dec 2020

FoggySight: A Scheme for Facial Lookup Privacy
Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno (PICV, FedML), 15 Dec 2020

Adaptive Verifiable Training Using Pairwise Class Similarity
Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy (OOD), 14 Dec 2020

Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints
X. Li, Xiangrui Li, Deng Pan, D. Zhu (AAML), 14 Dec 2020

Generating Out of Distribution Adversarial Attack using Latent Space Poisoning
Ujjwal Upadhyay, Prerana Mukherjee, 09 Dec 2020

Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy
Shashi Kant Gupta (OOD), 08 Dec 2020

Data-Dependent Randomized Smoothing
Motasem Alfarra, Adel Bibi, Philip Torr, Guohao Li (UQCV), 08 Dec 2020
Overcomplete Representations Against Adversarial Videos
Shao-Yuan Lo, Jeya Maria Jose Valanarasu, Vishal M. Patel (AAML), 08 Dec 2020

Backpropagating Linearly Improves Transferability of Adversarial Examples
Yiwen Guo, Qizhang Li, Hao Chen (FedML, AAML), 07 Dec 2020

Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection
Byunggill Joe, Jihun Hamm, Sung Ju Hwang, Sooel Son, I. Shin (AAML, OOD), 07 Dec 2020

Evaluating adversarial robustness in simulated cerebellum
Liu Yuezhang, Bo Li, Qifeng Chen (AAML), 05 Dec 2020

Advocating for Multiple Defense Strategies against Adversarial Examples
Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Négrevergne (AAML), 04 Dec 2020

Practical No-box Adversarial Attacks against DNNs
Qizhang Li, Yiwen Guo, Hao Chen (AAML), 04 Dec 2020
FAT: Federated Adversarial Training
Giulio Zizzo, Ambrish Rawat, M. Sinn, Beat Buesser (FedML), 03 Dec 2020

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques
Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu (AAML), 03 Dec 2020

Content-Adaptive Pixel Discretization to Improve Model Robustness
Ryan Feng, Wu-chi Feng, Atul Prakash (AAML), 03 Dec 2020

Interpretable Graph Capsule Networks for Object Recognition
Jindong Gu, Volker Tresp (FAtt), 03 Dec 2020
Towards Defending Multiple $\ell_p$-norm Bounded Adversarial Perturbations via Gated Batch Normalization
Aishan Liu, Shiyu Tang, Xinyun Chen, Lei Huang, Zhuozhuo Tu, Xianglong Liu, Dacheng Tao (AAML), 03 Dec 2020
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation
Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt (AAML), 02 Dec 2020

How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm (OOD, AAML), 02 Dec 2020

Adversarial Robustness Across Representation Spaces
Pranjal Awasthi, George Yu, Chun-Sung Ferng, Andrew Tomkins, Da-Cheng Juan (OOD, AAML), 01 Dec 2020

Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu (AAML), 30 Nov 2020

Robust and Private Learning of Halfspaces
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thao Nguyen, 30 Nov 2020
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
Debayan Deb, Xiaoming Liu, Anil K. Jain (CVBM, AAML, PICV), 28 Nov 2020

Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation
Ching-Chia Kao, Jhe-Bang Ko, Chun-Shien Lu (AAML), 28 Nov 2020

Incorporating Hidden Layer representation into Adversarial Attacks and Defences
Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang (AAML), 28 Nov 2020

Voting based ensemble improves robustness of defensive models
Devvrit, Minhao Cheng, Cho-Jui Hsieh, Inderjit Dhillon (OOD, FedML, AAML), 28 Nov 2020

A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks
Hao Shen, Sihong Chen, Ran Wang, 27 Nov 2020
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Mingfu Xue, Chengxiang Yuan, Can He, Zhiyu Wu, Yushu Zhang, Yanfeng Guo, Weiqiang Liu (MIACV), 27 Nov 2020

Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness
Yilun Jin, Lixin Fan, Kam Woh Ng, Ce Ju, Qiang Yang (AAML, OOD), 27 Nov 2020

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks
Abhishek Moitra, Priyadarshini Panda (AAML), 26 Nov 2020

Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption
Ivan Evtimov, Russ Howes, Brian Dolhansky, Hamed Firooz, Cristian Canton Ferrer (AAML), 25 Nov 2020

On Adversarial Robustness of 3D Point Cloud Classification under Adaptive Attacks
Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao (3DPC), 24 Nov 2020
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu, Tianpei Xia, Laurie A. Williams, Tim Menzies (AAML), 23 Nov 2020

Learnable Boundary Guided Adversarial Training
Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia (OOD, AAML), 23 Nov 2020

A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations
Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow (AAML), 21 Nov 2020