ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
arXiv:1802.00420

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 721 papers shown
Explainability and Robustness of Deep Visual Classification Models
Jindong Gu · AAML · 03 Jan 2023

Guidance Through Surrogate: Towards a Generic Diagnostic Attack
Muzammal Naseer, Salman Khan, Fatih Porikli, Fahad Shahbaz Khan · AAML · 30 Dec 2022

On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization
Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang · OOD · 18 Dec 2022

Confidence-aware Training of Smoothed Classifiers for Certified Robustness
Jongheon Jeong, Seojin Kim, Jinwoo Shin · AAML · 18 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller · FAtt · 16 Dec 2022

Adversarial Example Defense via Perturbation Grading Strategy
Shaowei Zhu, Wanli Lyu, Bin Li, Z. Yin, Bin Luo · AAML · 16 Dec 2022

On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices
Salah Ghamizi, Maxime Cordy, Michail Papadakis, Yves Le Traon · OOD · 15 Dec 2022
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks
Nikolaos Antoniou, Efthymios Georgiou, Alexandros Potamianos · AAML · 15 Dec 2022

Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Eric Wang, Carl Vondrick · VLM · 14 Dec 2022

Adversarially Robust Video Perception by Seeing Motion
Lingyu Zhang, Chengzhi Mao, Junfeng Yang, Carl Vondrick · VGen, AAML · 13 Dec 2022

Robust Perception through Equivariance
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hongya Wang, Carl Vondrick · BDL, AAML · 12 Dec 2022

REAP: A Large-Scale Realistic Adversarial Patch Benchmark
Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner · AAML · 12 Dec 2022

General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments
Xiaogang Xu, Hengshuang Zhao, Philip Torr, Jiaya Jia · AAML · 11 Dec 2022
Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran · NoLa, TDI · 08 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong · SSL · 06 Dec 2022

Adversarial Rademacher Complexity of Deep Neural Networks
Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhimin Luo · AAML · 27 Nov 2022

Reliable Robustness Evaluation via Automatically Constructed Attack Ensembles
Shengcai Liu, Fu Peng, Jiaheng Zhang · AAML · 23 Nov 2022

Benchmarking Adversarially Robust Quantum Machine Learning at Scale
Maxwell T. West, S. Erfani, C. Leckie, M. Sevior, Lloyd C. L. Hollenberg, Muhammad Usman · AAML, OOD · 23 Nov 2022

Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack
Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David C. Hogg, Meng Wang · AAML · 21 Nov 2022
Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization
Jiafeng Wang, Zhaoyu Chen, Kaixun Jiang, Dingkang Yang, Lingyi Hong, Pinxue Guo, Yan Wang, Wenqiang Zhang · AAML · 21 Nov 2022

Towards Good Practices in Evaluating Transfer Adversarial Attacks
Zhengyu Zhao, Hanwei Zhang, Renjue Li, R. Sicre, Laurent Amsaleg, Michael Backes · AAML · 17 Nov 2022

Textual Manifold-based Defense Against Natural Language Adversarial Examples
D. M. Nguyen, Anh Tuan Luu · AAML · 05 Nov 2022

An Adversarial Robustness Perspective on the Topology of Neural Networks
Morgane Goibert, Thomas Ricatte, Elvis Dohmatob · AAML · 04 Nov 2022

Adversarial Attack on Radar-based Environment Perception Systems
Amira Guesmi, Ihsen Alouani · AAML · 02 Nov 2022

Private and Reliable Neural Network Inference
Nikola Jovanović, Marc Fischer, Samuel Steffen, Martin Vechev · 27 Oct 2022

Adversarial Purification with the Manifold Hypothesis
Zhaoyuan Yang, Zhiwei Xu, Jing Zhang, Richard I. Hartley, Peter Tu · AAML · 26 Oct 2022

Accelerating Certified Robustness Training via Knowledge Transfer
Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati · 25 Oct 2022
Revisiting Sparse Convolutional Model for Visual Recognition
Xili Dai, Mingyang Li, Pengyuan Zhai, Shengbang Tong, Xingjian Gao, Shao-Lun Huang, Zhihui Zhu, Chong You, Yi Ma · FAtt · 24 Oct 2022

ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang · AAML · 22 Oct 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu · AAML · 19 Oct 2022

Scaling Adversarial Training to Large Perturbation Bounds
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu · AAML · 18 Oct 2022

When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Yi Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang · ViT · 14 Oct 2022

Visual Prompting for Adversarial Robustness
Aochuan Chen, P. Lorenz, Yuguang Yao, Pin-Yu Chen, Sijia Liu · VLM, VPVLM · 12 Oct 2022

Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization
Ziquan Liu, Antoni B. Chan · AAML · 11 Oct 2022

Towards Out-of-Distribution Adversarial Robustness
Adam Ibrahim, Charles Guille-Escuret, Ioannis Mitliagkas, Irina Rish, David M. Krueger, P. Bashivan · OOD · 06 Oct 2022
Strength-Adaptive Adversarial Training
Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Biwei Huang, Nannan Wang, Tongliang Liu · OOD · 04 Oct 2022

Stability Analysis and Generalization Bounds of Adversarial Training
Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhimin Luo · AAML · 03 Oct 2022

A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks
Kevin Hector, Mathieu Dumont, Pierre-Alain Moëllic, J. Dutertre · AAML · 28 Sep 2022

Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition Models
R. Olivier, H. Abdullah, Bhiksha Raj · AAML · 17 Sep 2022

Robustness in Deep Learning: The Good (Width), the Bad (Depth), and the Ugly (Initialization)
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher · 15 Sep 2022

PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack
Junxuan Huang, Yatong An, Lu Cheng, Bai Chen, Junsong Yuan, Chunming Qiao · 3DPC · 14 Sep 2022

CARE: Certifiably Robust Learning with Reasoning via Variational Inference
Jiawei Zhang, Linyi Li, Ce Zhang, Bo-wen Li · AAML, OOD · 12 Sep 2022

Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen · AAML · 07 Sep 2022
Defending Against Backdoor Attack on Graph Neural Network by Explainability
B. Jiang, Zhao Li · AAML, GNN · 07 Sep 2022

Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution
Ming-Kuai Zhou, Xiaobing Pei · AAML · 31 Aug 2022

Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries
Dongbin Na, Sangwoo Ji, Jong Kim · AAML · 24 Aug 2022

Adversarial Vulnerability of Temporal Feature Networks for Object Detection
Svetlana Pavlitskaya, Nikolai Polley, Michael Weber, J. Marius Zöllner · AAML · 23 Aug 2022

Two Heads are Better than One: Robust Learning Meets Multi-branch Models
Dong Huang, Qi Bu, Yuhao Qing, Haowen Pi, Sen Wang, Heming Cui · OOD, AAML · 17 Aug 2022

Ad Hoc Teamwork in the Presence of Adversaries
Ted Fujimoto, Samrat Chatterjee, A. Ganguly · 09 Aug 2022

Robust Real-World Image Super-Resolution against Adversarial Attacks
N. Babaguchi, John R. Smith, Pengxu Wei, T. Plagemann, Rong Yan · AAML · 31 Jul 2022