arXiv: 1802.00420

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, David Wagner · AAML · 1 February 2018

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 766 papers shown)

Multiplicative Reweighting for Robust Neural Network Optimization
Noga Bar, Tomer Koren, Raja Giryes · OOD, NoLa · 20 / 9 / 0 · 24 Feb 2021

Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev · AAML · 19 / 24 / 0 · 23 Feb 2021

Effective and Efficient Vote Attack on Capsule Networks
Jindong Gu, Baoyuan Wu, Volker Tresp · AAML · 19 / 26 / 0 · 19 Feb 2021

Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, K. Tomsovic, Hairong Qi · AAML · 44 / 14 / 0 · 17 Feb 2021

Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla, Sahil Singla, David Jacobs, S. Feizi · AAML · 43 / 45 / 0 · 15 Feb 2021

CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
Mingu Kang, T. Tran, Seungju Cho, Daeyoung Kim · AAML · 27 / 3 / 0 · 15 Feb 2021

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu · 41 / 134 / 0 · 14 Feb 2021

"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
Sahar Abdelnabi, Mario Fritz · AAML · 32 / 7 / 0 · 09 Feb 2021

Understanding the Interaction of Adversarial Training with Noisy Labels
Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama · AAML · 27 / 27 / 0 · 06 Feb 2021

Fast Training of Provably Robust Neural Networks by SingleProp
Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel · AAML · 11 / 7 / 0 · 01 Feb 2021

A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xing Gao, Gang Xiao, Aishan Liu, Zehao Zhao, Xianglong Liu, Wenjun Wu · AAML, ELM · 38 / 55 / 0 · 24 Jan 2021

Error Diffusion Halftoning Against Adversarial Examples
Shao-Yuan Lo, Vishal M. Patel · DiffM · 15 / 4 / 0 · 23 Jan 2021

Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, E. Bitar, Ersin Yumer, R. Urtasun · AAML · 34 / 76 / 0 · 17 Jan 2021

Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup B. Rao, Tung Mai · AAML · 22 / 18 / 0 · 15 Jan 2021

The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt · AAML · 116 / 33 / 0 · 11 Jan 2021

DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning
Olakunle Ibitoye, M. O. Shafiq, Ashraf Matrawy · FedML · 28 / 18 / 0 · 08 Jan 2021

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi · OOD · 81 / 100 / 0 · 04 Jan 2021

Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training
Theodoros Tsiligkaridis, Jay Roberts · AAML · 24 / 11 / 0 · 22 Dec 2020

Self-Progressing Robust Training
Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das · AAML, VLM · 29 / 9 / 0 · 22 Dec 2020

On Success and Simplicity: A Second Look at Transferable Targeted Attacks
Zhengyu Zhao, Zhuoran Liu, Martha Larson · AAML · 46 / 122 / 0 · 21 Dec 2020

On the human-recognizability phenomenon of adversarially trained deep image classifiers
Jonathan W. Helland, Nathan M. VanHoudnos · AAML · 27 / 4 / 0 · 18 Dec 2020

A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
Linjie Li, Zhe Gan, Jingjing Liu · VLM · 33 / 42 / 0 · 15 Dec 2020

Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints
X. Li, Xiangrui Li, Deng Pan, D. Zhu · AAML · 21 / 17 / 0 · 14 Dec 2020

Generating Out of Distribution Adversarial Attack using Latent Space Poisoning
Ujjwal Upadhyay, Prerana Mukherjee · 39 / 7 / 0 · 09 Dec 2020

Data-Dependent Randomized Smoothing
Motasem Alfarra, Adel Bibi, Philip Torr, Guohao Li · UQCV · 28 / 34 / 0 · 08 Dec 2020

FAT: Federated Adversarial Training
Giulio Zizzo, Ambrish Rawat, M. Sinn, Beat Buesser · FedML · 33 / 43 / 0 · 03 Dec 2020

Interpretable Graph Capsule Networks for Object Recognition
Jindong Gu, Volker Tresp · FAtt · 24 / 36 / 0 · 03 Dec 2020

How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm · OOD, AAML · 28 / 32 / 0 · 02 Dec 2020

Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu · AAML · 28 / 92 / 0 · 30 Nov 2020

A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks
Hao Shen, Sihong Chen, Ran Wang · 30 / 5 / 0 · 27 Nov 2020

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks
Abhishek Moitra, Priyadarshini Panda · AAML · 32 / 2 / 0 · 26 Nov 2020

Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu, Tianpei Xia, Laurie A. Williams, Tim Menzies · AAML · 32 / 15 / 0 · 23 Nov 2020

Learnable Boundary Guided Adversarial Training
Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia · OOD, AAML · 32 / 124 / 0 · 23 Nov 2020

Adversarial Classification: Necessary conditions and geometric flows
Nicolas García Trillos, Ryan W. Murray · AAML · 39 / 19 / 0 · 21 Nov 2020

Contextual Fusion For Adversarial Robustness
Aiswarya Akumalla, S. Haney, M. Bazhenov · AAML · 27 / 1 / 0 · 18 Nov 2020

Adversarially Robust Classification based on GLRT
Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani · VLM, AAML · 23 / 4 / 0 · 16 Nov 2020

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong · 21 / 24 / 0 · 15 Nov 2020

A survey on practical adversarial examples for malware classifiers
Daniel Park, B. Yener · AAML · 46 / 14 / 0 · 06 Nov 2020

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan · AAML · 25 / 49 / 0 · 05 Nov 2020

Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
Hao Fu, A. Veldanda, Prashanth Krishnamurthy, S. Garg, Farshad Khorrami · AAML · 38 / 14 / 0 · 04 Nov 2020

Towards Robust Neural Networks via Orthogonal Diversity
Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang · AAML · 41 / 8 / 0 · 23 Oct 2020

Class-Conditional Defense GAN Against End-to-End Speech Attacks
Mohammad Esmaeilpour, P. Cardinal, Alessandro Lameiras Koerich · AAML · 21 / 14 / 0 · 22 Oct 2020

Learning Black-Box Attackers with Transferable Priors and Query Feedback
Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao · AAML · 18 / 81 / 0 · 21 Oct 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein · VLM · 234 / 681 / 0 · 19 Oct 2020

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 41 / 48 / 0 · 19 Oct 2020

A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models
Ferhat Ozgur Catak, Samed Sivaslioglu, Kevser Sahinbas · AAML · 28 / 7 / 0 · 17 Oct 2020

A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin · GAN, AAML · 21 / 22 / 0 · 15 Oct 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli · AAML · 22 / 325 / 0 · 07 Oct 2020

Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang · SILM · 23 / 56 / 0 · 06 Oct 2020

Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim, Woojin Lee, Jaewook Lee · AAML · 16 / 108 / 0 · 05 Oct 2020