ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
arXiv:1802.00420 · v4 (latest) · 1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,929 papers shown
FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples
  Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao
  AAML · 83 · 0 · 0 · 02 Jun 2022

Why Adversarial Training of ReLU Networks Is Difficult?
  Xu Cheng, Hao Zhang, Yue Xin, Wen Shen, Jie Ren, Quanshi Zhang
  AAML · 57 · 3 · 0 · 30 May 2022

Guided Diffusion Model for Adversarial Purification
  Jinyi Wang, Zhaoyang Lyu, Dahua Lin, Bo Dai, Hongfei Fu
  DiffM · 279 · 90 · 0 · 30 May 2022

Robust Weight Perturbation for Adversarial Training
  Chaojian Yu, Bo Han, Biwei Huang, Li Shen, Shiming Ge, Bo Du, Tongliang Liu
  AAML · 75 · 36 · 0 · 30 May 2022

Superclass Adversarial Attack
  Soichiro Kumano, Hiroshi Kera, T. Yamasaki
  AAML · 72 · 1 · 0 · 29 May 2022

Rethinking Bayesian Learning for Data Analysis: The Art of Prior and Inference in Sparsity-Aware Modeling
  Lei Cheng, Feng Yin, Sergios Theodoridis, S. Chatzis, Tsung-Hui Chang
  128 · 78 · 0 · 28 May 2022

Certified Robustness Against Natural Language Attacks by Causal Intervention
  Haiteng Zhao, Chang Ma, Xinshuai Dong, Anh Tuan Luu, Zhi-Hong Deng, Hanwang Zhang
  AAML · 108 · 36 · 0 · 24 May 2022

EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling
  Mitch Hill, Jonathan Mitchell, Chu Chen, Yuan Du, M. Shah, Song-Chun Zhu
  36 · 0 · 0 · 24 May 2022

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks
  Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang
  AAML · 199 · 28 · 0 · 24 May 2022

Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization
  Shudong Zhang, Haichang Gao, Tianwei Zhang, Yunyi Zhou, Zihui Wu
  AAML · 82 · 4 · 0 · 24 May 2022

Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
  Shawn Shan, Wen-Luan Ding, Emily Wenger, Haitao Zheng, Ben Y. Zhao
  AAML · 77 · 12 · 0 · 21 May 2022

Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification
  Leo Schwinn, Leon Bungert, A. Nguyen, René Raab, Falk Pulsmeyer, Doina Precup, Björn Eskofier, Dario Zanca
  OOD · 93 · 15 · 0 · 19 May 2022

On Trace of PGD-Like Adversarial Attacks
  Mo Zhou, Vishal M. Patel
  AAML · 75 · 4 · 0 · 19 May 2022

Empirical Advocacy of Bio-inspired Models for Robust Image Recognition
  Harshitha Machiraju, Oh-hyeon Choung, Michael H. Herzog, P. Frossard
  AAML · VLM · OOD · 53 · 6 · 0 · 18 May 2022

Lessons Learned: Defending Against Property Inference Attacks
  Joshua Stock, Jens Wettlaufer, Daniel Demmler, Hannes Federrath
  AAML · 95 · 1 · 0 · 18 May 2022

Robust Representation via Dynamic Feature Aggregation
  Haozhe Liu, Haoqin Ji, Yuexiang Li, Nanjun He, Haoqian Wu, Feng Liu, Linlin Shen, Yefeng Zheng
  AAML · OOD · 91 · 3 · 0 · 16 May 2022

Diffusion Models for Adversarial Purification
  Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar
  WIGM · 281 · 456 · 0 · 16 May 2022

AEON: A Method for Automatic Evaluation of NLP Test Cases
  Jen-tse Huang, Jianping Zhang, Wenxuan Wang, Pinjia He, Yuxin Su, Michael R. Lyu
  83 · 23 · 0 · 13 May 2022

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness
  Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
  UQCV · AAML · 70 · 2 · 0 · 12 May 2022

Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
  Harel Berger, A. Dvir, Chen Hajaj, Rony Ronen
  AAML · 66 · 3 · 0 · 09 May 2022

Subverting Fair Image Search with Generative Adversarial Perturbations
  A. Ghosh, Matthew Jagielski, Chris L. Wilson
  89 · 7 · 0 · 05 May 2022

CE-based white-box adversarial attacks will not work using super-fitting
  Youhuan Yang, Lei Sun, Leyu Dai, Song Guo, Xiuqing Mao, Xiaoqin Wang, Bayi Xu
  AAML · 104 · 0 · 0 · 04 May 2022

Enhancing Adversarial Training with Feature Separability
  Yaxin Li, Xiaorui Liu, Han Xu, Wentao Wang, Jiliang Tang
  AAML · GAN · 25 · 1 · 0 · 02 May 2022

DDDM: a Brain-Inspired Framework for Robust Classification
  Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang
  AAML · DiffM · 77 · 7 · 0 · 01 May 2022

Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations
  Na Liu, Mark Dras, Wei Emma Zhang
  AAML · 51 · 6 · 0 · 29 Apr 2022

Defending Person Detection Against Adversarial Patch Attack by using Universal Defensive Frame
  Youngjoon Yu, Hong Joo Lee, Hakmin Lee, Yong Man Ro
  AAML · 44 · 12 · 0 · 27 Apr 2022

When adversarial examples are excusable
  Pieter-Jan Kindermans, Charles Staats
  AAML · 52 · 0 · 0 · 25 Apr 2022

How Sampling Impacts the Robustness of Stochastic Neural Networks
  Sina Daubener, Asja Fischer
  SILM · AAML · 57 · 1 · 0 · 22 Apr 2022

Case-Aware Adversarial Training
  Mingyuan Fan, Yang Liu, Ximeng Liu
  AAML · 47 · 1 · 0 · 20 Apr 2022

Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile Edge
  Qun Song, Zhenyu Yan, W. Luo, Rui Tan
  AAML · 46 · 2 · 0 · 18 Apr 2022

Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning
  Mathias Lechner, Alexander Amini, Daniela Rus, T. Henzinger
  AAML · 91 · 10 · 0 · 15 Apr 2022

Liuer Mihou: A Practical Framework for Generating and Evaluating Grey-box Adversarial Attacks against NIDS
  Ke He, Dan Dongseong Kim, Jing Sun, J. Yoo, Young Hun Lee, H. Kim
  AAML · 39 · 5 · 0 · 12 Apr 2022

Toward Robust Spiking Neural Network Against Adversarial Perturbation
  Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie
  AAML · 77 · 16 · 0 · 12 Apr 2022

Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks
  Stefan Pócos, Iveta Becková, Igor Farkas
  AAML · 47 · 2 · 0 · 12 Apr 2022

3DeformRS: Certifying Spatial Deformations on Point Clouds
  S. GabrielPérez, Juan C. Pérez, Motasem Alfarra, Silvio Giancola, Guohao Li
  3DPC · 95 · 12 · 0 · 12 Apr 2022

A Simple Approach to Adversarial Robustness in Few-shot Image Classification
  Akshayvarun Subramanya, Hamed Pirsiavash
  VLM · 71 · 6 · 0 · 11 Apr 2022

Measuring the False Sense of Security
  Carlos Gomes
  AAML · 58 · 0 · 0 · 10 Apr 2022

Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser
  Sonal Joshi, Saurabh Kataria, Yiwen Shao, Piotr Żelasko, Jesus Villalba, Sanjeev Khudanpur, Najim Dehak
  AAML · 42 · 4 · 0 · 08 Apr 2022

Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network
  Byung-Kwan Lee, Junho Kim, Y. Ro
  AAML · 59 · 20 · 0 · 06 Apr 2022

Recent improvements of ASR models in the face of adversarial attacks
  R. Olivier, Bhiksha Raj
  AAML · 126 · 14 · 0 · 29 Mar 2022

NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models
  Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang
  75 · 42 · 0 · 29 Mar 2022

Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients
  Kaidong Li, Ziming Zhang, Cuncong Zhong, Guanghui Wang
  3DPC · 78 · 25 · 0 · 29 Mar 2022

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
  Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Min-Fong Hong, Shiyu Chang, Sijia Liu
  AAML · 129 · 34 · 0 · 27 Mar 2022

A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
  Zhuang Qian, Kaizhu Huang, Qiufeng Wang, Xu-Yao Zhang
  OOD · AAML · ObjD · 128 · 73 · 0 · 26 Mar 2022

Enhancing Classifier Conservativeness and Robustness by Polynomiality
  Ziqi Wang, Marco Loog
  AAML · 46 · 3 · 0 · 23 Mar 2022

Adversarial Parameter Attack on Deep Neural Networks
  Lijia Yu, Yihan Wang, Xiao-Shan Gao
  AAML · 76 · 9 · 0 · 20 Mar 2022

Adversarial Defense via Image Denoising with Chaotic Encryption
  Shi Hu, Eric T. Nalisnick, Max Welling
  54 · 2 · 0 · 19 Mar 2022

Alleviating Adversarial Attacks on Variational Autoencoders with MCMC
  Anna Kuzina, Max Welling, Jakub M. Tomczak
  AAML · DRL · 102 · 12 · 0 · 18 Mar 2022

Towards Robust 2D Convolution for Reliable Visual Recognition
  Lida Li, Shuai Li, Kun Wang, Xiangchu Feng, Lei Zhang
  36 · 1 · 0 · 18 Mar 2022

On the Properties of Adversarially-Trained CNNs
  Mattia Carletti, M. Terzi, Gian Antonio Susto
  AAML · 66 · 1 · 0 · 17 Mar 2022