Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
arXiv:1802.00420, 1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 papers shown)
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings
Vito Walter Anelli
Alejandro Bellogín
Yashar Deldjoo
Tommaso Di Noia
Felice Antonio Merra
AAML
25
5
0
03 Oct 2020
Do Wider Neural Networks Really Help Adversarial Robustness?
Boxi Wu
Jinghui Chen
Deng Cai
Xiaofei He
Quanquan Gu
AAML
112
95
0
03 Oct 2020
Efficient Robust Training via Backward Smoothing
Jinghui Chen
Yu Cheng
Zhe Gan
Quanquan Gu
Jingjing Liu
AAML
83
40
0
03 Oct 2020
Interpreting Robust Optimization via Adversarial Influence Functions
Zhun Deng
Cynthia Dwork
Jialiang Wang
Linjun Zhang
TDI
49
12
0
03 Oct 2020
Query complexity of adversarial attacks
Grzegorz Gluch
R. Urbanke
AAML
67
5
0
02 Oct 2020
An alternative proof of the vulnerability of retrieval in high intrinsic dimensionality neighborhood
Teddy Furon
AAML
393
0
0
02 Oct 2020
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
Vito Walter Anelli
Tommaso Di Noia
Daniele Malitesta
Felice Antonio Merra
AAML
34
2
0
02 Oct 2020
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone
Hitoshi Kiya
76
57
0
02 Oct 2020
Bag of Tricks for Adversarial Training
Tianyu Pang
Xiao Yang
Yinpeng Dong
Hang Su
Jun Zhu
AAML
90
270
0
01 Oct 2020
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning
Guneet Singh Dhillon
Nicholas Carlini
AAML
29
1
0
30 Sep 2020
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
Huanrui Yang
Jingyang Zhang
Hongliang Dong
Nathan Inkawhich
Andrew B. Gardner
Andrew Touchet
Wesley Wilkes
Heath Berry
H. Li
AAML
83
109
0
30 Sep 2020
Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
Yifei Huang
Yaodong Yu
Hongyang R. Zhang
Yi-An Ma
Yuan Yao
AAML
84
27
0
28 Sep 2020
A Unifying Review of Deep and Shallow Anomaly Detection
Lukas Ruff
Jacob R. Kauffmann
Robert A. Vandermeulen
G. Montavon
Wojciech Samek
Marius Kloft
Thomas G. Dietterich
Klaus-Robert Müller
UQCV
150
806
0
24 Sep 2020
Adversarial robustness via stochastic regularization of neural activation sensitivity
Gil Fidel
Ron Bitton
Ziv Katzir
A. Shabtai
AAML
35
1
0
23 Sep 2020
Semantics-Preserving Adversarial Training
Won-Ok Lee
Hanbit Lee
Sang-goo Lee
AAML
34
2
0
23 Sep 2020
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber
Nana Liu
Yue Liu
Ce Zhang
Zhikuan Zhao
AAML
83
32
0
21 Sep 2020
Feature Distillation With Guided Adversarial Contrastive Learning
Tao Bai
Jinnan Chen
Jun Zhao
Bihan Wen
Xudong Jiang
Alex C. Kot
AAML
65
9
0
21 Sep 2020
Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness
Tuan-Anh Bui
Trung Le
He Zhao
Paul Montague
O. deVel
Tamas Abraham
Dinh Q. Phung
AAML
FedML
73
11
0
21 Sep 2020
Adversarial Training with Stochastic Weight Average
Joong-won Hwang
Youngwan Lee
Sungchan Oh
Yuseok Bae
OOD
AAML
65
11
0
21 Sep 2020
Efficient Certification of Spatial Robustness
Anian Ruoss
Maximilian Baader
Mislav Balunović
Martin Vechev
AAML
75
26
0
19 Sep 2020
Online Alternate Generator against Adversarial Attacks
Haofeng Li
Yirui Zeng
Guanbin Li
Liang Lin
Yizhou Yu
AAML
69
6
0
17 Sep 2020
Certifying Confidence via Randomized Smoothing
Aounon Kumar
Alexander Levine
Soheil Feizi
Tom Goldstein
UQCV
98
40
0
17 Sep 2020
Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin
AAML
47
0
0
16 Sep 2020
Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks
Chen Ma
Shuyu Cheng
Li Chen
Jun Zhu
Junhai Yong
AAML
50
7
0
15 Sep 2020
Decision-based Universal Adversarial Attack
Jing Wu
Mingyi Zhou
Shuaicheng Liu
Yipeng Liu
Ce Zhu
AAML
80
13
0
15 Sep 2020
Input Hessian Regularization of Neural Networks
Waleed Mustafa
Robert A. Vandermeulen
Marius Kloft
AAML
54
12
0
14 Sep 2020
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
Ambar Pal
René Vidal
AAML
106
27
0
14 Sep 2020
Defending Against Multiple and Unforeseen Adversarial Videos
Shao-Yuan Lo
Vishal M. Patel
AAML
72
24
0
11 Sep 2020
Second Order Optimization for Adversarial Robustness and Interpretability
Theodoros Tsiligkaridis
Jay Roberts
AAML
42
8
0
10 Sep 2020
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Ricardo Bigolin Lanfredi
Joyce D. Schroeder
Tolga Tasdizen
64
12
0
10 Sep 2020
SoK: Certified Robustness for Deep Neural Networks
Linyi Li
Tao Xie
Yue Liu
AAML
123
131
0
09 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
136
164
0
08 Sep 2020
Detection Defense Against Adversarial Attacks with Saliency Map
Dengpan Ye
Chuanxi Chen
Changrui Liu
Hao Wang
Shunzhi Jiang
AAML
57
28
0
06 Sep 2020
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Wei-An Lin
Chun Pong Lau
Alexander Levine
Ramalingam Chellappa
Soheil Feizi
AAML
121
60
0
05 Sep 2020
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping
Liam H. Fowl
Wenjie Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML
100
222
0
04 Sep 2020
Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation
Danilo Vasconcellos Vargas
Bingli Liao
Takahiro Kanzaki
AAML
45
3
0
02 Sep 2020
ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition
Jiuniu Wang
Wenjia Xu
Xingyu Fu
Guangluan Xu
Yirong Wu
50
58
0
02 Sep 2020
Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Chen Ma
Lixing Chen
Junhai Yong
MLAU
OOD
93
17
0
02 Sep 2020
Adversarially Robust Neural Architectures
Minjing Dong
Yanxi Li
Yunhe Wang
Chang Xu
AAML
OOD
100
49
0
02 Sep 2020
Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance
A. Lohn
51
13
0
02 Sep 2020
Shape Defense Against Adversarial Attacks
Ali Borji
AAML
33
1
0
31 Aug 2020
An Integrated Approach to Produce Robust Models with High Efficiency
Zhijian Li
Bao Wang
Jack Xin
MQ
AAML
35
3
0
31 Aug 2020
Benchmarking adversarial attacks and defenses for time-series data
Shoaib Ahmed Siddiqui
Andreas Dengel
Sheraz Ahmed
AAML
AI4TS
20
11
0
30 Aug 2020
Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More
Aleksandar Bojchevski
Johannes Klicpera
Stephan Günnemann
AAML
118
87
0
29 Aug 2020
Adversarially Robust Learning via Entropic Regularization
Gauri Jagatap
Ameya Joshi
A. B. Chowdhury
S. Garg
Chinmay Hegde
OOD
128
11
0
27 Aug 2020
On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
Deboleena Roy
I. Chakraborty
Timur Ibrayev
Kaushik Roy
AAML
64
4
0
27 Aug 2020
Adversarially Training for Audio Classifiers
Raymel Alfonso Sallo
Mohammad Esmaeilpour
P. Cardinal
AAML
47
8
0
26 Aug 2020
Towards adversarial robustness with 01 loss neural networks
Yunzhe Xue
Meiyan Xie
Usman Roshan
OOD
AAML
66
5
0
20 Aug 2020
Yet Another Intermediate-Level Attack
Qizhang Li
Yiwen Guo
Hao Chen
AAML
59
52
0
20 Aug 2020
On ℓp-norm Robustness of Ensemble Stumps and Trees
Yihan Wang
Huan Zhang
Hongge Chen
Duane S. Boning
Cho-Jui Hsieh
AAML
42
7
0
20 Aug 2020