ResearchTrend.AI
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 733 papers shown
Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh
31 Oct 2019
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh
AAML
30 Oct 2019
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi, A. Ding
AAML
27 Oct 2019
Detection of Adversarial Attacks and Characterization of Adversarial Subspace
Mohammad Esmaeilpour, P. Cardinal, Alessandro Lameiras Koerich
AAML
26 Oct 2019
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples
Mauro Barni, Ehsan Nowroozi, B. Tondi, Bowen Zhang
AAML
25 Oct 2019
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein
AAML
25 Oct 2019
Adversarial Example Detection by Classification for Deep Speech Recognition
Saeid Samizade, Zheng-Hua Tan, Chao Shen, X. Guan
AAML
22 Oct 2019
An Alternative Surrogate Loss for PGD-based Adversarial Testing
Sven Gowal, J. Uesato, Chongli Qin, Po-Sen Huang, Timothy A. Mann, Pushmeet Kohli
AAML
21 Oct 2019
Adversarial T-shirt! Evading Person Detectors in A Physical World
Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin
AAML
18 Oct 2019
A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger
AAML
16 Oct 2019
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
He Zhao, Trung Le, Paul Montague, O. Vel, Tamas Abraham, Dinh Q. Phung
AAML
03 Oct 2019
Deep Neural Rejection against Adversarial Examples
Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
AAML
01 Oct 2019
Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha, Akshayvarun Subramanya, Koninika Patil, Hamed Pirsiavash
ObjD, AAML
30 Sep 2019
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
TTA, OOD
29 Sep 2019
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ
27 Sep 2019
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang, Kun Xu, Jun Zhu
AAML
25 Sep 2019
Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh
AAML
24 Sep 2019
Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik
AAML
20 Sep 2019
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, S. Adachi
AAML
19 Sep 2019
Sparse and Imperceivable Adversarial Attacks
Francesco Croce, Matthias Hein
AAML
11 Sep 2019
Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Tianlin Li, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, S. Chin
AAML
20 Aug 2019
Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai
AI4CE
17 Aug 2019
BlurNet: Defense by Filtering the Feature Maps
Ravi Raju, Mikko H. Lipasti
AAML
06 Aug 2019
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
Haichao Zhang, Jianyu Wang
AAML
24 Jul 2019
Towards Adversarially Robust Object Detection
Haichao Zhang, Jianyu Wang
AAML, ObjD
24 Jul 2019
Structure-Invariant Testing for Machine Translation
Pinjia He, Clara Meister, Z. Su
19 Jul 2019
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, S. Sabour, Colin Raffel, G. Cottrell, Geoffrey E. Hinton
GAN, AAML
05 Jul 2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein
AAML
03 Jul 2019
Accurate, reliable and fast robustness evaluation
Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
AAML, OOD
01 Jul 2019
Evolving Robust Neural Architectures to Defend from Adversarial Attacks
Shashank Kotyan, Danilo Vasconcellos Vargas
OOD, AAML
27 Jun 2019
Defending Adversarial Attacks by Correcting logits
Yifeng Li, Lingxi Xie, Ya Zhang, Rui Zhang, Yanfeng Wang, Qi Tian
AAML
26 Jun 2019
Quantitative Verification of Neural Networks And its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena
AAML
25 Jun 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin, David Wagner
AAML
23 Jun 2019
Defending Against Adversarial Attacks Using Random Forests
Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong
AAML
16 Jun 2019
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li
13 Jun 2019
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay, Rehana Mahfuz, Aly El Gamal
AAML
13 Jun 2019
E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles
M. Kettunen, Erik Härkönen, J. Lehtinen
AAML
10 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
AAML
10 Jun 2019
Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh
AAML
10 Jun 2019
Improving Neural Language Modeling via Adversarial Training
Dilin Wang, Chengyue Gong, Qiang Liu
AAML
10 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML
09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Bohao Li
AAML
09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein
08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan
AAML
08 Jun 2019
Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
SILM, AAML
07 Jun 2019
Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
AAML
05 Jun 2019
Multi-way Encoding for Robustness
Donghyun Kim, Sarah Adel Bargal, Jianming Zhang, Stan Sclaroff
AAML
05 Jun 2019
Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou, H. Lee, E. Chang, Teck Khim Ng
01 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML
31 May 2019
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan
AAML
30 May 2019