Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Anish Athalye, Nicholas Carlini, D. Wagner
1 February 2018 · arXiv 1802.00420 · AAML
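For context on the paper the citations below refer to: its central finding is that defenses relying on non-differentiable or randomized input transformations ("obfuscated gradients") can typically be circumvented, for example with Backward Pass Differentiable Approximation (BPDA). The following is a minimal illustrative sketch of that idea, not the authors' code; the toy model, the quantization "defense", and all hyperparameters are assumptions chosen only to make the example self-contained.

# Minimal BPDA (straight-through gradient) sketch -- illustrative only.
import torch
import torch.nn as nn

class QuantizeBPDA(torch.autograd.Function):
    # Non-differentiable input quantization in the forward pass;
    # BPDA approximates its gradient with the identity in the backward pass.
    @staticmethod
    def forward(ctx, x):
        return torch.round(x * 255.0) / 255.0

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

def pgd_with_bpda(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Hypothetical PGD loop that attacks through the BPDA-wrapped defense.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(QuantizeBPDA.apply(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

# Toy usage: a stand-in linear classifier on random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
print((pgd_with_bpda(model, x, y) - x).abs().max())  # perturbation bounded by eps

In the paper's actual evaluations, attacks of this style (together with Expectation over Transformation for randomized defenses) circumvented most of the defenses examined; the snippet above only illustrates the mechanism.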

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 1,929 citing papers.
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
  Jingkang Wang, R. Jia, Gerald Friedland, Yangqiu Song, C. Spanos · AAML · 23 Oct 2018

Adversarial Risk Bounds via Function Transformation
  Justin Khim, Po-Ling Loh · AAML · 22 Oct 2018

Cost-Sensitive Robustness against Adversarial Examples
  Xiao Zhang, David Evans · AAML · 22 Oct 2018

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
  Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, A. Lozano, Cho-Jui Hsieh, Luca Daniel · 19 Oct 2018

Provable Robustness of ReLU networks via Maximization of Linear Regions
  Francesco Croce, Maksym Andriushchenko, Matthias Hein · 17 Oct 2018

Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers
  Nicole Nichols, Robert J. Jasper · AAML · 16 Oct 2018
Security Matters: A Survey on Adversarial Machine Learning
  Guofu Li, Pengjia Zhu, Jin Li, Zhemin Yang, Ning Cao, Zhiyi Chen · AAML · 16 Oct 2018

Deep Reinforcement Learning
  Yuxi Li · VLM, OffRL · 15 Oct 2018

Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only
  T. Zheng, Changyou Chen, K. Ren · AAML · 10 Oct 2018

The Outer Product Structure of Neural Network Derivatives
  Craig Bakker, Michael J. Henry, Nathan Oken Hodas · 09 Oct 2018

Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness
  Chihuang Liu, Joseph Jaja · AAML · 04 Oct 2018

Can Adversarially Robust Learning Leverage Computational Hardness?
  Saeed Mahloujifar, Mohammad Mahmoody · AAML, OOD · 02 Oct 2018

Adversarial Examples - A Complete Characterisation of the Phenomenon
  A. Serban, E. Poll, Joost Visser · SILM, AAML · 02 Oct 2018
Improved robustness to adversarial examples using Lipschitz regularization of the loss
  Chris Finlay, Adam M. Oberman, B. Abbasi · 01 Oct 2018

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
  Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh · AAML, OOD · 01 Oct 2018

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
  Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu · AAML · 30 Sep 2018

CAAD 2018: Generating Transferable Adversarial Examples
  Yash Sharma, Tien-Dung Le, M. Alzantot · AAML, SILM · 29 Sep 2018

Characterizing Audio Adversarial Examples Using Temporal Dependency
  Zhuolin Yang, Yue Liu, Pin-Yu Chen, Basel Alomair · AAML · 28 Sep 2018

Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards
  Mhafuzul Islam, M. Chowdhury, Hongda Li, Hongxin Hu · AAML · 27 Sep 2018
Low Frequency Adversarial Perturbation
  Chuan Guo, Jared S. Frank, Kilian Q. Weinberger · AAML · 24 Sep 2018

Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization
  Bao Wang, A. Lin, Weizhi Zhu, Penghang Yin, Andrea L. Bertozzi, Stanley J. Osher · AAML · 23 Sep 2018

Unrestricted Adversarial Examples
  Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, Ian Goodfellow · AAML · 22 Sep 2018

Playing the Game of Universal Adversarial Perturbations
  Julien Perolat, Mateusz Malinowski, Bilal Piot, Olivier Pietquin · AAML · 20 Sep 2018

HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
  Deqiang Li, Ramesh Baral, Tao Li, Han Wang, Qianmu Li, Shouhuai Xu · AAML · 18 Sep 2018
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
  Siyue Wang, Tianlin Li, Pu Zhao, Wujie Wen, David Kaeli, S. Chin, Xinyu Lin · AAML · 13 Sep 2018

Query-Efficient Black-Box Attack by Active Learning
  Pengcheng Li, Jinfeng Yi, Lijun Zhang · AAML, MLAU · 13 Sep 2018

On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
  Yusuke Tsuzuku, Issei Sato · AAML · 11 Sep 2018

Certified Adversarial Robustness with Additive Noise
  Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin · AAML · 10 Sep 2018

Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
  Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, Aleksander Madry · AAML, OOD · 09 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
  Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli · SILM, AAML · 08 Sep 2018
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples
  Dan Peng, Zizhan Zheng, Xiaofeng Zhang · AAML · 08 Sep 2018

Are adversarial examples inevitable?
  Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein · SILM · 06 Sep 2018

Bridging machine learning and cryptography in defence against adversarial attacks
  O. Taran, Shideh Rezaeifar, Svyatoslav Voloshynovskiy · AAML · 05 Sep 2018

MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks
  Siwakorn Srisakaokul, Yuhao Zhang, Zexuan Zhong, Wei Yang, Tao Xie, Bo Li · AAML · 31 Aug 2018

Reinforcement Learning for Autonomous Defence in Software-Defined Networking
  Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, T. Alpcan, O. Vel, S. Erfani, David Hubczenko, C. Leckie, Paul Montague · AAML · 17 Aug 2018
Distributionally Adversarial Attack
  T. Zheng, Changyou Chen, K. Ren · OOD · 16 Aug 2018

Adversarial Vision Challenge
  Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, M. Salathé, Sharada Mohanty, Matthias Bethge · AAML · 06 Aug 2018

Structured Adversarial Attack: Towards General Implementation and Better Interpretability
  Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xinyu Lin · AAML · 05 Aug 2018

Security and Privacy Issues in Deep Learning
  Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon · SILM, MIACV · 31 Jul 2018

Rob-GAN: Generator, Discriminator, and Adversarial Attacker
  Xuanqing Liu, Cho-Jui Hsieh · GAN · 27 Jul 2018
Evaluating and Understanding the Robustness of Adversarial Logit Pairing
  Logan Engstrom, Andrew Ilyas, Anish Athalye · AAML · 26 Jul 2018

Limitations of the Lipschitz constant as a defense against adversarial examples
  Todd P. Huster, C. Chiang, R. Chadha · AAML · 25 Jul 2018

Motivating the Rules of the Game for Adversarial Example Research
  Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David G. Andersen, George E. Dahl · AAML · 18 Jul 2018

Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions
  Adnan Siraj Rakin, Jinfeng Yi, Boqing Gong, Deliang Fan · AAML, MQ · 18 Jul 2018

Adaptive Adversarial Attack on Scene Text Recognition
  Xiaoyong Yuan, Pan He, Xiaolin Li, Dapeng Oliver Wu · AAML · 09 Jul 2018
Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness
  Priyadarshini Panda, Kaushik Roy · AAML · 05 Jul 2018

Local Gradients Smoothing: Defense against localized adversarial attacks
  Muzammal Naseer, Salman H. Khan, Fatih Porikli · AAML · 03 Jul 2018

Adversarial Robustness Toolbox v1.0.0
  Maria-Irina Nicolae, M. Sinn, Minh-Ngoc Tran, Beat Buesser, Ambrish Rawat, ..., Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, Ben Edwards · AAML, VLM · 03 Jul 2018

Detection based Defense against Adversarial Examples from the Steganalysis Point of View
  Jiayang Liu, Weiming Zhang, Yiwei Zhang, Dongdong Hou, Yujia Liu, Hongyue Zha, Nenghai Yu · AAML · 21 Jun 2018

Gradient Adversarial Training of Neural Networks
  Ayan Sinha, Zhao Chen, Vijay Badrinarayanan, Andrew Rabinovich · AAML · 21 Jun 2018