Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
arXiv:1802.00420 · 1 February 2018 · [AAML]

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 1,929 citing papers (title on the first line; authors · [topic tags] · date on the second):
Towards Adversarially Robust Object Detection
Haichao Zhang, Jianyu Wang · [AAML, ObjD] · 24 Jul 2019
Enhancing Adversarial Example Transferability with an Intermediate Level Attack
Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge J. Belongie, Ser-Nam Lim · [SILM, AAML] · 23 Jul 2019
Structure-Invariant Testing for Machine Translation
Pinjia He, Clara Meister, Z. Su · 19 Jul 2019
Connecting Lyapunov Control Theory to Adversarial Attacks
Arash Rahnama, A. Nguyen, Edward Raff · [AAML] · 17 Jul 2019
Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi · [AAML] · 17 Jul 2019
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Wonseok Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao · [AAML] · 16 Jul 2019
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning
Bao Wang, Stanley J. Osher · [AAML, AI4CE] · 16 Jul 2019
Stateful Detection of Black-Box Adversarial Attacks
Steven Chen, Nicholas Carlini, D. Wagner · [AAML, MLAU] · 12 Jul 2019
Forecasting remaining useful life: Interpretable deep learning approach via variational Bayesian inferences
Mathias Kraus, Stefan Feuerriegel · 11 Jul 2019
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn
Ziv Katzir, Yuval Elovici · [AAML] · 11 Jul 2019
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, S. Sabour, Colin Raffel, G. Cottrell, Geoffrey E. Hinton · [GAN, AAML] · 05 Jul 2019
Adversarial Robustness through Local Linearization
Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli · [AAML] · 04 Jul 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein · [AAML] · 03 Jul 2019
Diminishing the Effect of Adversarial Perturbations via Refining Feature Representation
Nader Asadi, Amirm. Sarfi, Mehrdad Hosseinzadeh, Sahba Tahsini, M. Eftekhari · [AAML] · 01 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge · [AAML, OOD] · 01 Jul 2019
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Basel Alomair · [OOD, SSL] · 28 Jun 2019
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah · [AAML] · 27 Jun 2019
Evolving Robust Neural Architectures to Defend from Adversarial Attacks
Shashank Kotyan, Danilo Vasconcellos Vargas · [OOD, AAML] · 27 Jun 2019
Defending Adversarial Attacks by Correcting logits
Yifeng Li, Lingxi Xie, Ya Zhang, Rui Zhang, Yanfeng Wang, Qi Tian · [AAML] · 26 Jun 2019
Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena · [AAML] · 25 Jun 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin, David Wagner · [AAML] · 23 Jun 2019
Improving the robustness of ImageNet classifiers using elements of human visual cognition
A. Orhan, Brenden M. Lake · [VLM] · 20 Jun 2019
Convergence of Adversarial Training in Overparametrized Neural Networks
Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee · [AAML] · 19 Jun 2019
Improving Black-box Adversarial Attacks with a Transfer-based Prior
Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu · [AAML] · 17 Jun 2019
MixUp as Directional Adversarial Training
Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang · [AAML] · 17 Jun 2019
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy
Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Arno Solin, Yoshua Bengio · [AAML] · 16 Jun 2019
Defending Against Adversarial Attacks Using Random Forests
Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong · [AAML] · 16 Jun 2019
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences
Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki · 15 Jun 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Yue Liu, Duane S. Boning, Cho-Jui Hsieh · [AAML] · 14 Jun 2019
Towards Compact and Robust Deep Neural Networks
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana · [AAML] · 14 Jun 2019
Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary
Shashank Kotyan, Danilo Vasconcellos Vargas · [AAML] · 14 Jun 2019
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li · 13 Jun 2019
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay, Rehana Mahfuz, Aly El Gamal · [AAML] · 13 Jun 2019
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles
M. Kettunen, Erik Härkönen, J. Lehtinen · [AAML] · 10 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi Zhou, Cho-Jui Hsieh · [AAML] · 10 Jun 2019
Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh · [AAML] · 10 Jun 2019
Improving Neural Language Modeling via Adversarial Training
Dilin Wang, Chengyue Gong, Qiang Liu · [AAML] · 10 Jun 2019
Improved Adversarial Robustness via Logit Regularization Methods
Cecilia Summers, M. Dinneen · [AAML] · 10 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck · [AAML] · 09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song · [AAML] · 09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein · 08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan · [AAML] · 08 Jun 2019
Making targeted black-box evasion attacks effective and efficient
Mika Juuti, B. Atli, Nadarajah Asokan · [AAML, MIACV, MLAU] · 08 Jun 2019
Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri · [SILM, AAML] · 07 Jun 2019
A cryptographic approach to black box adversarial machine learning
Kevin Shi, Daniel J. Hsu, Allison Bishop · [AAML] · 07 Jun 2019
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data
Yan Li, Ethan X. Fang, Huan Xu, T. Zhao · 07 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Walt Woods, Jack H Chen, C. Teuscher · [AAML] · 07 Jun 2019
Should Adversarial Attacks Use Pixel p-Norm?
Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert D. Nowak · 06 Jun 2019
MNIST-C: A Robustness Benchmark for Computer Vision
Norman Mu, Justin Gilmer · 05 Jun 2019
Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana · [AAML] · 05 Jun 2019