arXiv:1802.00420 (v4, latest)
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 papers shown)
Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein · AAML · 29 Nov 2019

Attributional Robustness Training using Input-Gradient Spatial Alignment
M. Singh, Nupur Kumari, Puneet Mangla, Abhishek Sinha, V. Balasubramanian, Balaji Krishnamurthy · OOD · 29 Nov 2019

Indirect Local Attacks for Context-aware Semantic Segmentation Networks
Krishna Kanth Nakka, Mathieu Salzmann · SSeg, AAML · 29 Nov 2019

Can Attention Masks Improve Adversarial Robustness?
Pratik Vaishnavi, Tianji Cong, Kevin Eykholt, A. Prakash, Amir Rahmati · AAML · 27 Nov 2019

Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Mihailo Isakov, V. Gadepally, K. Gettings, Michel A. Kinsy · AAML · 27 Nov 2019

One Man's Trash is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples
Chang Xiao, Changxi Zheng · AAML · 25 Nov 2019

Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference
Wei-An Lin, Yogesh Balaji, Pouya Samangouei, Rama Chellappa · 23 Nov 2019

Controversial stimuli: pitting neural networks against each other as models of human recognition
Tal Golan, Prashant C. Raju, N. Kriegeskorte · AAML · 21 Nov 2019

Fine-grained Synthesis of Unrestricted Adversarial Examples
Omid Poursaeed, Tianxing Jiang, Yordanos Goshu, Harry Yang, Serge J. Belongie, Ser-Nam Lim · AAML · 20 Nov 2019

Robust Deep Neural Networks Inspired by Fuzzy Logic
Minh Le · OOD, AAML, AI4CE · 20 Nov 2019

Defective Convolutional Networks
Tiange Luo, Tianle Cai, Mengxiao Zhang, Siyu Chen, Di He, Liwei Wang · AAML · 19 Nov 2019

Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks
Alvin Chan, Yew-Soon Ong · AAML · 19 Nov 2019

WITCHcraft: Efficient PGD attacks with random step size
Ping Yeh-Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi · AAML · 18 Nov 2019

Smoothed Inference for Adversarially-Trained Models
Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, A. Bronstein, A. Mendelson · AAML, FedML · 17 Nov 2019

Black-Box Adversarial Attack with Transferable Model-based Embedding
Zhichao Huang, Tong Zhang · 17 Nov 2019

Defensive Few-shot Learning
Wenbin Li, Lei Wang, Xingxing Zhang, Lei Qi, Jing Huo, Yang Gao, Jiebo Luo · 16 Nov 2019

Adversarial Embedding: A robust and elusive Steganography and Watermarking technique
Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon · WIGM, AAML · 14 Nov 2019

Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno, Anqi Xu, Ousmane Amadou Dia, A. D. Berker · AAML · 13 Nov 2019

Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
Arash Rahnama, A. Nguyen, Edward Raff · AAML · 12 Nov 2019

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
Ren Pang, Hua Shen, Xinyang Zhang, S. Ji, Yevgeniy Vorobeychik, Xiaopu Luo, Alex Liu, Ting Wang · AAML · 05 Nov 2019

Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Guangke Chen, Sen Chen, Lingling Fan, Xiaoning Du, Zhe Zhao, Fu Song, Yang Liu · AAML · 03 Nov 2019

MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks
Shai Rozenberg, G. Elidan, Ran El-Yaniv · AAML · 03 Nov 2019

Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh · 31 Oct 2019

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh · AAML · 30 Oct 2019

Certified Adversarial Robustness for Deep Reinforcement Learning
Björn Lütjens, Michael Everett, Jonathan P. How · AAML · 28 Oct 2019

Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi, A. Ding · AAML · 27 Oct 2019

Detection of Adversarial Attacks and Characterization of Adversarial Subspace
Mohammad Esmaeilpour, P. Cardinal, Alessandro Lameiras Koerich · AAML · 26 Oct 2019

Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples
Mauro Barni, Ehsan Nowroozi, B. Tondi, Bowen Zhang · AAML · 25 Oct 2019

Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein · AAML · 25 Oct 2019

Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
Alexander Levine, Soheil Feizi · AAML · 23 Oct 2019

A Useful Taxonomy for Adversarial Robustness of Neural Networks
L. Smith · AAML · 23 Oct 2019

Modeling plate and spring reverberation using a DSP-informed deep neural network
M. M. Ramírez, Emmanouil Benetos, Joshua D. Reiss · 22 Oct 2019

Adversarial Example Detection by Classification for Deep Speech Recognition
Saeid Samizade, Zheng-Hua Tan, Chao Shen, X. Guan · AAML · 22 Oct 2019

Structure Matters: Towards Generating Transferable Adversarial Images
Dan Peng, Zizhan Zheng, Linhao Luo, Xiaofeng Zhang · AAML · 22 Oct 2019

An Alternative Surrogate Loss for PGD-based Adversarial Testing
Sven Gowal, J. Uesato, Chongli Qin, Po-Sen Huang, Timothy A. Mann, Pushmeet Kohli · AAML · 21 Oct 2019

Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Simran Kaur, Jeremy M. Cohen, Zachary Chase Lipton · OOD, AAML · 18 Oct 2019

A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning
Yasaman Esfandiari, Aditya Balu, K. Ebrahimi, Umesh Vaidya, N. Elia, Soumik Sarkar · OOD · 18 Oct 2019

Adversarial T-shirt! Evading Person Detectors in A Physical World
Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin · AAML · 18 Oct 2019

Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation
A. Sarkar, Nikhil Kumar Gupta, Raghu Sesha Iyengar · AAML · 17 Oct 2019

A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger · AAML · 16 Oct 2019

Extracting robust and accurate features via a robust information bottleneck
Ankit Pensia, Varun Jog, Po-Ling Loh · AAML · 15 Oct 2019

DeepSearch: A Simple and Effective Blackbox Attack for Deep Neural Networks
Fuyuan Zhang, Sankalan Pal Chowdhury, M. Christakis · AAML · 14 Oct 2019

Real-world adversarial attack on MTCNN face detection system
Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Aleksandrovich Pautov, Aleksandr Petiushko · CVBM, AAML · 14 Oct 2019

Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
David Stutz, Matthias Hein, Bernt Schiele · AAML · 14 Oct 2019

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models
Derui Wang, Chaoran Li, S. Wen, Surya Nepal, Yang Xiang · AAML · 14 Oct 2019

Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications
M. Terzi, Gian Antonio Susto, Pratik Chaudhari · OOD, AAML · 08 Oct 2019

AdvSPADE: Realistic Unrestricted Attacks for Semantic Segmentation
Guangyu Shen, Chengzhi Mao, Junfeng Yang, Baishakhi Ray · GAN · 06 Oct 2019

BUZz: BUffer Zones for defending adversarial examples in image classification
Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh van Nguyen, Marten van Dijk · AAML · 03 Oct 2019

Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
He Zhao, Trung Le, Paul Montague, O. Vel, Tamas Abraham, Dinh Q. Phung · AAML · 03 Oct 2019

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
Micah Goldblum, Liam H. Fowl, Tom Goldstein · 02 Oct 2019