Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

arXiv:1802.00420 · 1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML
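The paper's central idea is that many defenses only appear robust because they obfuscate gradients, and such defenses can be circumvented with techniques like Backward Pass Differentiable Approximation (BPDA): run the non-differentiable preprocessor g (with g(x) ≈ x) in the forward pass, but substitute a differentiable surrogate such as the identity in the backward pass, so standard gradient attacks go through. The sketch below is a minimal, illustrative PyTorch version of that idea under the assumption of a differentiable classifier `model` with inputs in [0, 1]; the `quantize` preprocessor, the `pgd_bpda` helper, and all hyperparameters are assumptions for this example, not code from the paper.

```python
# Minimal BPDA sketch (illustrative, not the authors' implementation).
import torch
import torch.nn.functional as F


class BPDAIdentity(torch.autograd.Function):
    """Apply a (possibly non-differentiable) preprocessor in the forward pass,
    but treat it as the identity in the backward pass."""

    @staticmethod
    def forward(ctx, x, preprocess):
        return preprocess(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate d preprocess(x)/dx by the identity, since preprocess(x) ~= x.
        return grad_output, None


def quantize(x, levels=16):
    # Example gradient-masking preprocessor: color-depth reduction (zero gradient a.e.).
    return torch.round(x * (levels - 1)) / (levels - 1)


def pgd_bpda(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    # Standard L-infinity PGD, with gradients routed "through" the preprocessor via BPDA.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(BPDAIdentity.apply(x_adv, quantize))
        loss = F.cross_entropy(logits, y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```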

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,929 papers shown

Pyramid Adversarial Training Improves ViT Performance
Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun
ViT · 118 · 59 · 0 · 30 Nov 2021

Adaptive Image Transformations for Transfer-based Adversarial Attack
Zheng Yuan, Jie Zhang, Shiguang Shan
OOD · 89 · 27 · 0 · 27 Nov 2021

Clustering Effect of (Linearized) Adversarial Robust Models
Yang Bai, Xin Yan, Yong Jiang, Shutao Xia, Yisen Wang
OOD · AAML · 79 · 5 · 0 · 25 Nov 2021

Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity
B. Aquino, Arash Rahnama, Peter M. Seiler, Lizhen Lin, Vijay Gupta
AAML · 58 · 8 · 0 · 25 Nov 2021

Unity is strength: Improving the Detection of Adversarial Examples with Ensemble Approaches
Francesco Craighero, Fabrizio Angaroni, Fabio Stella, Chiara Damiani, M. Antoniotti, Alex Graudenzi
AAML · 72 · 8 · 0 · 24 Nov 2021

Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
AAML · OOD · 108 · 59 · 0 · 24 Nov 2021

Medical Aegis: Robust adversarial protectors for medical images
Qingsong Yao, Zecheng He, S. Kevin Zhou
AAML · MedIm · 69 · 2 · 0 · 22 Nov 2021

Denoised Internal Models: a Brain-Inspired Autoencoder against Adversarial Attacks
Kaiyuan Liu, Xingyu Li, Yu-Rui Lai, Hong Xie, Hang Su, Jiacheng Wang, Chunxu Guo, J. Guan, Yi Zhou
AAML · 89 · 4 · 0 · 21 Nov 2021

Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He
AAML · 126 · 115 · 0 · 21 Nov 2021

Fooling Adversarial Training with Inducing Noise
Zhirui Wang, Yifei Wang, Yisen Wang
78 · 14 · 0 · 19 Nov 2021

Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method
Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Yue Liu, Alex C. Kot
AAML · 46 · 5 · 0 · 19 Nov 2021

A Review of Adversarial Attack and Defense for Classification Methods
Yao Li, Minhao Cheng, Cho-Jui Hsieh, T. C. Lee
AAML · 76 · 69 · 0 · 18 Nov 2021

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness
Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin
AAML · 85 · 57 · 0 · 17 Nov 2021

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception
Joel Dapello, J. Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, J. DiCarlo, SueYeon Chung
AAML · OOD · 68 · 25 · 0 · 12 Nov 2021

Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi, Sven Gowal, D. A. Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann
AAML · 65 · 295 · 0 · 09 Nov 2021

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
AAML · 87 · 18 · 0 · 09 Nov 2021

Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search
Pengfei Xia, Ziqiang Li, Bin Li
AAML · 121 · 3 · 0 · 09 Nov 2021

Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
Lijia Yu, Xiao-Shan Gao
AAML · 116 · 5 · 0 · 08 Nov 2021

Sequential Randomized Smoothing for Adversarially Robust Speech Recognition
R. Olivier, Bhiksha Raj
AAML · 133 · 11 · 0 · 05 Nov 2021

LTD: Low Temperature Distillation for Robust Adversarial Training
Erh-Chung Chen, Che-Rung Lee
AAML · 127 · 27 · 0 · 03 Nov 2021

Recent Advancements in Self-Supervised Paradigms for Visual Feature Representation
Mrinal Anand, Aditya Garg
SSL · 46 · 2 · 0 · 03 Nov 2021

Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
Maksym Yatsura, J. H. Metzen, Matthias Hein
OOD · 102 · 14 · 0 · 02 Nov 2021

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, Anima Anandkumar
105 · 78 · 0 · 02 Nov 2021

When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?
Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
AAML · VLM · 95 · 125 · 0 · 01 Nov 2021

Holistic Deep Learning
Dimitris Bertsimas, Kimberly Villalobos Carballo, L. Boussioux, M. Li, Alex Paskov, I. Paskov
83 · 3 · 0 · 29 Oct 2021

Adversarial Robustness with Semi-Infinite Constrained Learning
Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro
AAML · OOD · 184 · 46 · 0 · 29 Oct 2021

ε-weakened Robustness of Deep Neural Networks
Pei Huang, Yuting Yang, Minghao Liu, Fuqi Jia, Feifei Ma, Jian Zhang
AAML · 71 · 18 · 0 · 29 Oct 2021

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks
Hao Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Helen Zhou, Yi Chang, Xin Wang
AAML · 76 · 15 · 0 · 28 Oct 2021

Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, S. Jha
ELM · AAML · 92 · 15 · 0 · 27 Oct 2021

Towards Robust Reasoning over Knowledge Graphs
Zhaohan Xi, Ren Pang, Changjiang Li, S. Ji, Xiapu Luo, Xusheng Xiao, Ting Wang
36 · 0 · 0 · 27 Oct 2021

Improving Local Effectiveness for Global robust training
Jingyue Lu, M. P. Kumar
AAML · 49 · 0 · 0 · 26 Oct 2021

Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
AAML · 64 · 16 · 0 · 26 Oct 2021

A Frequency Perspective of Adversarial Robustness
Shishira R. Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, Abhinav Shrivastava
AAML · 72 · 40 · 0 · 26 Oct 2021

Defensive Tensorization
Adrian Bulat, Jean Kossaifi, S. Bhattacharya, Yannis Panagakis, Timothy M. Hospedales, Georgios Tzimiropoulos, Nicholas D. Lane, Maja Pantic
AAML · 35 · 4 · 0 · 26 Oct 2021

Adversarial Robustness in Multi-Task Learning: Promises and Illusions
Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
OOD · AAML · 90 · 18 · 0 · 26 Oct 2021

Ensemble Federated Adversarial Training with Non-IID data
Shuang Luo, Didi Zhu, Zexi Li, Chao-Xiang Wu
FedML · 66 · 7 · 0 · 26 Oct 2021

ADC: Adversarial attacks against object Detection that evade Context consistency checks
Mingjun Yin, Shasha Li, Chengyu Song, M. Salman Asif, Amit K. Roy-Chowdhury, S. Krishnamurthy
AAML · 114 · 25 · 0 · 24 Oct 2021

A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness
Chang Song, Riya Ranjan, H. Li
MQ · 67 · 4 · 0 · 23 Oct 2021

Improving Robustness using Generated Data
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann
139 · 302 · 0 · 18 Oct 2021

Adversarial Attacks on ML Defense Models Competition
Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, ..., Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
AAML · 62 · 2 · 0 · 15 Oct 2021

Adversarial Purification through Representation Disentanglement
Tao Bai, Jun Zhao, Lanqing Guo, Bihan Wen
AAML · 37 · 1 · 0 · 15 Oct 2021

Interactive Analysis of CNN Robustness
Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner
AAML · 81 · 10 · 0 · 14 Oct 2021

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML · 170 · 51 · 0 · 13 Oct 2021

Boosting the Certified Robustness of L-infinity Distance Nets
Bohang Zhang, Du Jiang, Di He, Liwei Wang
OOD · 93 · 30 · 0 · 13 Oct 2021

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness
Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu
AAML · 66 · 4 · 0 · 13 Oct 2021

On the Security Risks of AutoML
Ren Pang, Zhaohan Xi, S. Ji, Xiapu Luo, Ting Wang
AAML · 54 · 10 · 0 · 12 Oct 2021

Parameterizing Activation Functions for Adversarial Robustness
Sihui Dai, Saeed Mahloujifar, Prateek Mittal
AAML · 84 · 32 · 0 · 11 Oct 2021

Intriguing Properties of Input-dependent Randomized Smoothing
Peter Súkeník, A. Kuvshinov, Stephan Günnemann
AAML · UQCV · 74 · 22 · 0 · 11 Oct 2021

Adversarial Token Attacks on Vision Transformers
Ameya Joshi, Gauri Jagatap, Chinmay Hegde
ViT · 104 · 19 · 0 · 08 Oct 2021

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma
AAML · TPM · 139 · 102 · 0 · 07 Oct 2021