ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.


Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML
ArXiv (abs) · PDF · HTML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,929 papers shown
Multi-way Encoding for Robustness
Donghyun Kim
Sarah Adel Bargal
Jianming Zhang
Stan Sclaroff
AAML
41
2
0
05 Jun 2019
Adversarial Training is a Form of Data-dependent Operator Norm Regularization
Kevin Roth
Yannic Kilcher
Thomas Hofmann
58
13
0
04 Jun 2019
Adversarially Robust Generalization Just Requires More Unlabeled Data
Runtian Zhai
Tianle Cai
Di He
Chen Dan
Kun He
John E. Hopcroft
Liwei Wang
98
158
0
03 Jun 2019
Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou
H. Lee
E. Chang
Teck Khim Ng
67
3
0
01 Jun 2019
Unlabeled Data Improves Adversarial Robustness
Y. Carmon
Aditi Raghunathan
Ludwig Schmidt
Percy Liang
John C. Duchi
130
754
0
31 May 2019
Are Labels Required for Improving Adversarial Robustness?
J. Uesato
Jean-Baptiste Alayrac
Po-Sen Huang
Robert Stanforth
Alhussein Fawzi
Pushmeet Kohli
AAML
97
335
0
31 May 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan
Reza Shokri
FedML, AAML
108
152
0
31 May 2019
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
Adnan Siraj Rakin
Zhezhi He
Li Yang
Yanzhi Wang
Liqiang Wang
Deliang Fan
AAML
96
21
0
30 May 2019
Bandlimiting Neural Networks Against Adversarial Attacks
Yuping Lin
Kasra Ahmadi K. A.
Hui Jiang
AAML
42
6
0
30 May 2019
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Saeed Mahloujifar
Xiao Zhang
Mohammad Mahmoody
David Evans
67
22
0
29 May 2019
Certifiably Robust Interpretation in Deep Learning
Alexander Levine
Sahil Singla
Soheil Feizi
FAtt, AAML
93
65
0
28 May 2019
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
Yuzhe Yang
Guo Zhang
Dina Katabi
Zhi Xu
AAML
100
171
0
28 May 2019
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
Pengcheng Li
Jinfeng Yi
Bowen Zhou
Lijun Zhang
AAML
65
37
0
28 May 2019
Brain-inspired reverse adversarial examples
Shaokai Ye
S. Tan
Kaidi Xu
Yanzhi Wang
Chenglong Bao
Kaisheng Ma
AAML
25
5
0
28 May 2019
Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg
S. Jha
Saeed Mahloujifar
Mohammad Mahmoody
AAML
163
24
0
28 May 2019
GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
Xuwang Yin
Soheil Kolouri
Gustavo K. Rohde
AAML
106
44
0
27 May 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
101
79
0
27 May 2019
Provable robustness against all adversarial $l_p$-perturbations for $p \geq 1$
Francesco Croce
Matthias Hein
OOD
78
75
0
27 May 2019
Non-Determinism in Neural Networks for Adversarial Robustness
Daanish Ali Khan
Linhong Li
Ninghao Sha
Zhuoran Liu
Abelino Jiménez
Bhiksha Raj
Rita Singh
OOD, AAML
33
3
0
26 May 2019
Robust Classification using Robust Feature Augmentation
Kevin Eykholt
Swati Gupta
Atul Prakash
Amir Rahmati
Pratik Vaishnavi
Haizhong Zheng
AAML
59
2
0
26 May 2019
Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran
Brian Tang
Nicolas Papernot
Kassem Fawaz
S. Jha
Xi Wu
AAML, OOD
100
8
0
26 May 2019
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
Alex Lamb
Jonathan Binas
Anirudh Goyal
Sandeep Subramanian
Ioannis Mitliagkas
Denis Kazakov
Yoshua Bengio
Michael C. Mozer
OOD
45
3
0
26 May 2019
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders
Hebi Li
Qi Xiao
Shixin Tian
Jin Tian
AAML
65
4
0
26 May 2019
Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks
Jirong Yi
Hui Xie
Leixin Zhou
Xiaodong Wu
Weiyu Xu
R. Mudumbai
AAML
75
6
0
25 May 2019
Enhancing Adversarial Defense by k-Winners-Take-All
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
80
99
0
25 May 2019
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Yuanshun Yao
Huiying Li
Haitao Zheng
Ben Y. Zhao
AAML
55
13
0
24 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM, MIACV, AAML
91
248
0
24 May 2019
Generative adversarial network based on chaotic time series
Makoto Naruse
Takashi Matsubara
N. Chauvet
Kazutaka Kanno
Tianyu Yang
Atsushi Uchida
AI4TS, GAN
72
19
0
24 May 2019
Thwarting finite difference adversarial attacks with output randomization
Haidar Khan
Daniel Park
Azer Khan
B. Yener
SILM, AAML
52
0
0
23 May 2019
Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang
Zhanxing Zhu
AAML, GAN, FAtt
123
161
0
23 May 2019
Adversarially robust transfer learning
Ali Shafahi
Parsa Saadatpanah
Chen Zhu
Amin Ghiasi
Christoph Studer
David Jacobs
Tom Goldstein
OOD
52
117
0
20 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
32
5
0
19 May 2019
Simple Black-box Adversarial Attacks
Chuan Guo
Jacob R. Gardner
Yurong You
A. Wilson
Kilian Q. Weinberger
AAML
78
581
0
17 May 2019
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
Nicholas Carlini
ELM
68
14
0
17 May 2019
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Seungyong Moon
Gaon An
Hyun Oh Song
AAML, MLAU
88
136
0
16 May 2019
On Norm-Agnostic Robustness of Adversarial Training
Bai Li
Changyou Chen
Wenlin Wang
Lawrence Carin
OOD, SILM
68
7
0
15 May 2019
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering
O. Taran
Shideh Rezaeifar
T. Holotyak
Svyatoslav Voloshynovskiy
AAML
52
1
0
14 May 2019
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
M. Singh
Abhishek Sinha
Nupur Kumari
Harshitha Machiraju
Balaji Krishnamurthy
V. Balasubramanian
AAML
56
61
0
13 May 2019
Moving Target Defense for Deep Visual Sensing against Adversarial Examples
Qun Song
Zhenyu Yan
Rui Tan
AAML
45
20
0
11 May 2019
Universal Adversarial Perturbations for Speech Recognition Systems
Paarth Neekhara
Shehzeen Samarah Hussain
Prakhar Pandey
Shlomo Dubnov
Julian McAuley
F. Koushanfar
AAML
82
118
0
09 May 2019
Adversarial Defense Framework for Graph Neural Network
Shen Wang
Zhengzhang Chen
Jingchao Ni
Xiao Yu
Zhichun Li
Haifeng Chen
Philip S. Yu
AAML, GNN
71
28
0
09 May 2019
An Empirical Evaluation of Adversarial Robustness under Transfer Learning
Todor Davchev
Timos Korres
Stathi Fotiadis
N. Antonopoulos
S. Ramamoorthy
AAML
38
0
0
07 May 2019
Adaptive Generation of Unrestricted Adversarial Inputs
Isaac Dunn
Hadrien Pouget
T. Melham
Daniel Kroening
AAML
61
7
0
07 May 2019
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
Aleksander Madry
SILM
113
1,846
0
06 May 2019
Batch Normalization is a Cause of Adversarial Vulnerability
A. Galloway
A. Golubeva
T. Tanay
M. Moussa
Graham W. Taylor
ODL, AAML
84
80
0
06 May 2019
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag
A. Bhagoji
Liwei Song
Chawin Sitawarin
Daniel Cullina
M. Chiang
Prateek Mittal
OODD
77
26
0
05 May 2019
Transfer of Adversarial Robustness Between Perturbation Types
Daniel Kang
Yi Sun
Tom B. Brown
Dan Hendrycks
Jacob Steinhardt
AAML
71
49
0
03 May 2019
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang
Tianyuan Zhang
Yiping Lu
Zhanxing Zhu
Bin Dong
AAML
130
362
0
02 May 2019
Adversarial Training with Voronoi Constraints
Marc Khoury
Dylan Hadfield-Menell
AAML
63
24
0
02 May 2019
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li
Lijun Li
Liqiang Wang
Tong Zhang
Boqing Gong
AAML
86
245
0
01 May 2019