ResearchTrend.AI

arXiv:1802.00420 — Cited By
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,929 papers shown
Adaptive Batch Normalization Networks for Adversarial Robustness
Shao-Yuan Lo
Vishal M. Patel
AAML OOD
63
1
0
20 May 2024
Certified $\ell_2$ Attribution Robustness via Uniformly Smoothed Attributions
Fan Wang
Adams Wai-Kin Kong
66
2
0
10 May 2024
Evaluating Adversarial Robustness in the Spatial Frequency Domain
Keng-Hsin Liao
Chin-Yuan Yeh
Hsi-Wen Chen
Ming-Syan Chen
69
0
0
10 May 2024
Towards Accurate and Robust Architectures via Neural Architecture Search
Yuwei Ou
Yuqi Feng
Yanan Sun
AAML
55
2
0
09 May 2024
Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre
Nicholas Carlini
AAML
58
2
0
06 May 2024
Certification of Speaker Recognition Models to Additive Perturbations
Dmitrii Korzh
Elvir Karimov
Mikhail Aleksandrovich Pautov
Oleg Y. Rogov
Ivan Oseledets
85
3
0
29 Apr 2024
Exploring the Robustness of In-Context Learning with Noisy Labels
Chen Cheng
Xinzhi Yu
Haodong Wen
Jinsong Sun
Guanzhang Yue
Yihao Zhang
Zeming Wei
NoLa
61
8
0
28 Apr 2024
Attacking Bayes: On the Adversarial Robustness of Bayesian Neural Networks
Yunzhen Feng
Tim G. J. Rudner
Nikolaos Tsilivis
Julia Kempe
AAML BDL
115
2
0
27 Apr 2024
Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks
Erh-Chung Chen
Pin-Yu Chen
I-Hsin Chung
Che-Rung Lee
AAML
67
2
0
24 Apr 2024
Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective
Yiming Liu
Kezhao Liu
Yao Xiao
Ziyi Dong
Xiaogang Xu
Pengxu Wei
Liang Lin
DiffM
63
2
0
22 Apr 2024
Struggle with Adversarial Defense? Try Diffusion
Yujie Li
Yanbin Wang
Peiyue Li
Bin Liu
Jianguo Sun
Yifan Jia
Wenrui Ma
DiffM
62
1
0
12 Apr 2024
Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization
Runqi Lin
Chaojian Yu
Tongliang Liu
AAML
113
12
0
11 Apr 2024
LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks
Jianlang Chen
Xuhong Ren
Qing Guo
Felix Juefei Xu
Di Lin
Wei Feng
Lei Ma
Jianjun Zhao
84
1
0
09 Apr 2024
Towards Robust Domain Generation Algorithm Classification
Arthur Drichel
Marc Meyer
Ulrike Meyer
AAML
72
3
0
09 Apr 2024
Investigating the Impact of Quantization on Adversarial Robustness
Qun Li
Yuan Meng
Chen Tang
Jiacheng Jiang
Zhi Wang
79
5
0
08 Apr 2024
Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing
Chengyan Fu
Wenjie Wang
AAML
87
0
0
08 Apr 2024
BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack
Viet Vo
Ehsan Abbasnejad
Damith C. Ranasinghe
AAML
92
5
0
08 Apr 2024
Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay
Yuhang Zhou
Zhongyun Hua
AAML CLL
96
4
0
02 Apr 2024
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Maksym Andriushchenko
Francesco Croce
Nicolas Flammarion
AAML
204
222
0
02 Apr 2024
Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches
Lingxuan Wu
Xiao Yang
Yinpeng Dong
Liuwei Xie
Hang Su
Jun Zhu
AAML
77
2
0
31 Mar 2024
MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
Yanting Wang
Hongye Fu
Wei Zou
Jinyuan Jia
AAML
49
2
0
28 Mar 2024
Bayesian Learned Models Can Detect Adversarial Malware For Free
Bao Gia Doan
Dang Quang Nguyen
Paul Montague
Tamas Abraham
O. Vel
S. Çamtepe
S. Kanhere
Ehsan Abbasnejad
Damith C. Ranasinghe
AAML
68
1
0
27 Mar 2024
The Anatomy of Adversarial Attacks: Concept-based XAI Dissection
Georgii Mikriukov
Gesina Schwalbe
Franz Motzkus
Korinna Bade
AAML
77
1
0
25 Mar 2024
Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models
Kaikang Zhao
Xi Chen
Wei Huang
Liuxin Ding
Xianglong Kong
Fan Zhang
AAML
75
1
0
25 Mar 2024
Testing the Limits of Jailbreaking Defenses with the Purple Problem
Taeyoun Kim
Suhas Kotha
Aditi Raghunathan
AAML
84
6
0
20 Mar 2024
Robustness Verification in Neural Networks
Adrian Wurm
69
0
0
20 Mar 2024
Certified Human Trajectory Prediction
Mohammadhossein Bahari
Saeed Saadatnejad
Amirhossein Asgari-Farsangi
Seyed-Mohsen Moosavi-Dezfooli
Alexandre Alahi
AAML
118
1
0
20 Mar 2024
Robust NAS under adversarial training: benchmark, theory, and beyond
Yongtao Wu
Fanghui Liu
Carl-Johann Simon-Gabriel
Grigorios G. Chrysos
Volkan Cevher
AAML OOD
92
5
0
19 Mar 2024
ADAPT to Robustify Prompt Tuning Vision Transformers
Masih Eskandar
Tooba Imtiaz
Zifeng Wang
Jennifer Dy
VP VLM AAML
92
0
0
19 Mar 2024
Towards Adversarially Robust Dataset Distillation by Curvature Regularization
Eric Xue
Yijiang Li
Haoyang Liu
Yifan Shen
Haohan Wang
DD
170
8
0
15 Mar 2024
Towards White Box Deep Learning
Maciej Satkiewicz
AAML
71
1
0
14 Mar 2024
Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks
Roey Bokobza
Yisroel Mirsky
AAML
71
0
0
14 Mar 2024
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement
Daiwei Yu
Zhuorong Li
Lina Wei
Canghong Jin
Yun Zhang
Sixian Chan
109
6
0
14 Mar 2024
Versatile Defense Against Adversarial Attacks on Image Recognition
Haibo Zhang
Zhihua Yao
Kouichi Sakurai
AAML
46
2
0
13 Mar 2024
PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor
Jaewon Jung
Hongsun Jang
Jaeyong Song
Jinho Lee
OOD AAML
259
6
0
11 Mar 2024
Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen
Wenwang Huang
Linsey Pang
Dongsheng Luo
Hua Wei
OOD
64
0
0
09 Mar 2024
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
Ping Guo
Cheng Gong
Xi Lin
Zhiyuan Yang
Qingfu Zhang
AAML
76
2
0
08 Mar 2024
DPAdapter: Improving Differentially Private Deep Learning through Noise Tolerance Pre-training
Zihao Wang
Rui Zhu
Dongruo Zhou
Zhikun Zhang
John C. Mitchell
Haixu Tang
Xiaofeng Wang
AAML
80
6
0
05 Mar 2024
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
Lin Li
Haoyan Guan
Jianing Qiu
Michael W. Spratling
AAML VLM VP
99
24
0
04 Mar 2024
Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks
Zhewei Wu
Ruilong Yu
Qihe Liu
Shuying Cheng
Shilin Qiu
Shijie Zhou
AAML
84
0
0
28 Feb 2024
Extreme Miscalibration and the Illusion of Adversarial Robustness
Vyas Raina
Samson Tan
Volkan Cevher
Aditya Rawal
Sheng Zha
George Karypis
AAML
86
3
0
27 Feb 2024
A Curious Case of Remarkable Resilience to Gradient Attacks via Fully Convolutional and Differentiable Front End with a Skip Connection
Leonid Boytsov
Ameya Joshi
Filipe Condessa
AAML
45
0
0
26 Feb 2024
Edge Detectors Can Make Deep Convolutional Neural Networks More Robust
Jin Ding
Jie-Chao Zhao
Yong-zhi Sun
Ping Tan
Jia-Wei Wang
Ji-en Ma
You-tong Fang
AAML
98
2
0
26 Feb 2024
Immunization against harmful fine-tuning attacks
Domenic Rosati
Jan Wehner
Kai Williams
Lukasz Bartoszcze
Jan Batzner
Hassan Sajjad
Frank Rudzicz
AAML
107
22
0
26 Feb 2024
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Yihao Zhang
Hangzhou He
Jingyu Zhu
Huanran Chen
Yifei Wang
Zeming Wei
AAML
127
15
0
23 Feb 2024
Stop Reasoning! When Multimodal LLMs with Chain-of-Thought Reasoning Meets Adversarial Images
Zefeng Wang
Zhen Han
Shuo Chen
Fan Xue
Zifeng Ding
Xun Xiao
Volker Tresp
Philip Torr
Jindong Gu
LRM
99
18
0
22 Feb 2024
Coercing LLMs to do and reveal (almost) anything
Jonas Geiping
Alex Stein
Manli Shu
Khalid Saifullah
Yuxin Wen
Tom Goldstein
AAML
85
55
0
21 Feb 2024
Rigor with Machine Learning from Field Theory to the Poincaré Conjecture
Sergei Gukov
James Halverson
Fabian Ruehle
AI4CE
79
16
0
20 Feb 2024
DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation
Yunjuan Wang
Hussein Hazimeh
Natalia Ponomareva
Alexey Kurakin
Ibrahim Hammoud
Raman Arora
OOD AAML
77
0
0
16 Feb 2024
Accuracy of TextFooler black box adversarial attacks on 01 loss sign activation neural network ensemble
Yunzhe Xue
Usman Roshan
AAML
62
0
0
12 Feb 2024