arXiv:1802.00420
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
1 February 2018 · AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 shown)
- DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses. Yaxin Li, Wei Jin, Han Xu, Jiliang Tang. AAML. 13 May 2020.
- Provable Robust Classification via Learned Smoothed Densities. Saeed Saremi, R. Srivastava. AAML. 09 May 2020.
- Efficient Exact Verification of Binarized Neural Networks. Kai Jia, Martin Rinard. AAML, MQ. 07 May 2020.
- GraCIAS: Grassmannian of Corrupted Images for Adversarial Security. Ankita Shukla, Pavan Turaga, Saket Anand. AAML. 06 May 2020.
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder. Guanlin Li, Shuya Ding, Jun Luo, Chang-rui Liu. AAML. 06 May 2020.
- Adversarial Training against Location-Optimized Adversarial Patches. Sukrut Rao, David Stutz, Bernt Schiele. AAML. 05 May 2020.
- Robust Encodings: A Framework for Combating Adversarial Typos. Erik Jones, Robin Jia, Aditi Raghunathan, Percy Liang. AAML. 04 May 2020.
- A Causal View on Robustness of Neural Networks. Cheng Zhang, Kun Zhang, Yingzhen Li. CML, OOD. 03 May 2020.
- Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability. H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna. AAML. 01 May 2020.
- Does Data Augmentation Improve Generalization in NLP? Rohan Jha, Charles Lovering, Ellie Pavlick. 30 Apr 2020.
- Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Pranjal Awasthi, Natalie Frank, M. Mohri. AAML. 28 Apr 2020.
- Harnessing adversarial examples with a surprisingly simple defense. Ali Borji. AAML. 26 Apr 2020.
- Improved Adversarial Training via Learned Optimizer. Yuanhao Xiong, Cho-Jui Hsieh. AAML. 25 Apr 2020.
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks. Jiawei Du, Hanshu Yan, Vincent Y. F. Tan, Qiufeng Wang, Rick Siow Mong Goh, Jiashi Feng. AAML. 24 Apr 2020.
- Adversarial Attacks and Defenses: An Interpretation Perspective. Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Helen Zhou. AAML. 23 Apr 2020.
- Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks. Jianhe Yuan, Zhihai He. AAML. 23 Apr 2020.
- Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation. Marvin Klingner, Andreas Bär, Tim Fingscheidt. AAML. 23 Apr 2020.
- Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks. William Aiken, Hyoungshick Kim, Simon S. Woo. 22 Apr 2020.
- Discovering Imperfectly Observable Adversarial Actions using Anomaly Detection. Olga Petrova, K. Durkota, Galina Alperovich, Karel Horak, Michal Najman, B. Bosanský, Viliam Lisý. AAML. 22 Apr 2020.
- Provably robust deep generative models. Filipe Condessa, Zico Kolter. AAML, OOD. 22 Apr 2020.
- Scalable Attack on Graph Data by Injecting Vicious Nodes. Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Z. Yang, Q. Zheng. AAML, GNN. 22 Apr 2020.
- Testing Machine Translation via Referential Transparency. Pinjia He, Clara Meister, Z. Su. 22 Apr 2020.
- Single-step Adversarial training with Dropout Scheduling. Vivek B.S., R. Venkatesh Babu. OOD, AAML. 18 Apr 2020.
- A Framework for Enhancing Deep Neural Networks Against Adversarial Malware. Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu. AAML. 15 Apr 2020.
- Adversarial Weight Perturbation Helps Robust Generalization. Dongxian Wu, Shutao Xia, Yisen Wang. OOD, AAML. 13 Apr 2020.
- Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning. Michael Everett, Bjorn Lutjens, Jonathan P. How. AAML. 11 Apr 2020.
- Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems. Eirini Anthi, Lowri Williams, Matilda Rhode, Pete Burnap, Adam Wedgbury. AAML. 10 Apr 2020.
- On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems. I. Tyukin, D. Higham, A. Gorban. AAML. 09 Apr 2020.
- Approximate Manifold Defense Against Multiple Adversarial Perturbations. Jay Nandy, Wynne Hsu, Mong Li Lee. AAML. 05 Apr 2020.
- SOAR: Second-Order Adversarial Regularization. A. Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand. AAML. 04 Apr 2020.
- Evading Deepfake-Image Detectors with White- and Black-Box Attacks. Nicholas Carlini, Hany Farid. AAML. 01 Apr 2020.
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes. Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu. AAML. 01 Apr 2020.
- MetaPoison: Practical General-purpose Clean-label Data Poisoning. Wenjie Huang, Jonas Geiping, Liam H. Fowl, Gavin Taylor, Tom Goldstein. 01 Apr 2020.
- Inverting Gradients -- How easy is it to break privacy in federated learning? Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller. FedML. 31 Mar 2020.
- Improved Gradient based Adversarial Attacks for Quantized Networks. Kartik Gupta, Thalaiyasingam Ajanthan. MQ. 30 Mar 2020.
- Towards Deep Learning Models Resistant to Large Perturbations. Amirreza Shaeiri, Rozhin Nobahari, M. Rohban. OOD, AAML. 30 Mar 2020.
- Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning. Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang. AAML. 28 Mar 2020.
- Adversarial Imitation Attack. Mingyi Zhou, Jing Wu, Yipeng Liu, Xiaolin Huang, Shuaicheng Liu, Xiang Zhang, Ce Zhu. AAML. 28 Mar 2020.
- DaST: Data-free Substitute Training for Adversarial Attacks. Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu. 28 Mar 2020.
- A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks. Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta. 26 Mar 2020.
- Defense Through Diverse Directions. Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva. AAML. 24 Mar 2020.
- Systematic Evaluation of Privacy Risks of Machine Learning Models. Liwei Song, Prateek Mittal. MIACV. 24 Mar 2020.
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations. Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy. AAML. 23 Mar 2020.
- Adversarial Attacks on Monocular Depth Estimation. Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo. AAML, MDE. 23 Mar 2020.
- Robust Out-of-distribution Detection for Neural Networks. Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha. OODD. 21 Mar 2020.
- Adversarial Robustness on In- and Out-Distribution Improves Explainability. Maximilian Augustin, Alexander Meinke, Matthias Hein. OOD. 20 Mar 2020.
- Investigating Image Applications Based on Spatial-Frequency Transform and Deep Learning Techniques. Qinkai Zheng, Han Qiu, G. Memmi, Isabelle Bloch. 20 Mar 2020.
- Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates. Amin Ghiasi, Ali Shafahi, Tom Goldstein. 19 Mar 2020.
- RAB: Provable Robustness Against Backdoor Attacks. Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Yue Liu. AAML. 19 Mar 2020.
- SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing. Chawin Sitawarin, S. Chakraborty, David Wagner. AAML. 18 Mar 2020.