Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018 · Anish Athalye, Nicholas Carlini, D. Wagner · AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 of 764 citing papers shown:
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone, Hitoshi Kiya · 02 Oct 2020

Adversarial Robustness of Stabilized NeuralODEs Might be from Obfuscated Gradients
Yifei Huang, Yaodong Yu, Hongyang R. Zhang, Yi Ma, Yuan Yao · AAML · 28 Sep 2020

Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber, Nana Liu, Yue Liu, Ce Zhang, Zhikuan Zhao · AAML · 21 Sep 2020

Adversarial Training with Stochastic Weight Average
Joong-won Hwang, Youngwan Lee, Sungchan Oh, Yuseok Bae · OOD, AAML · 21 Sep 2020

Certifying Confidence via Randomized Smoothing
Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein · UQCV · 17 Sep 2020

Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin · AAML · 16 Sep 2020

Input Hessian Regularization of Neural Networks
Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft · AAML · 14 Sep 2020

Defending Against Multiple and Unforeseen Adversarial Videos
Shao-Yuan Lo, Vishal M. Patel · AAML · 11 Sep 2020

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen · 10 Sep 2020

SoK: Certified Robustness for Deep Neural Networks
Linyi Li, Tao Xie, Yue Liu · AAML · 09 Sep 2020

Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado, Eugênio Silva, R. Goldschmidt · AAML · 08 Sep 2020

Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Wei-An Lin, Chun Pong Lau, Alexander Levine, Ramalingam Chellappa, S. Feizi · AAML · 05 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020

ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition
Jiuniu Wang, Wenjia Xu, Xingyu Fu, Guangluan Xu, Yirong Wu · 02 Sep 2020

Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Chen Ma, L. Chen, Junhai Yong · MLAU, OOD · 02 Sep 2020

Adversarially Robust Neural Architectures
Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu · AAML, OOD · 02 Sep 2020

On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
Deboleena Roy, I. Chakraborty, Timur Ibrayev, Kaushik Roy · AAML · 27 Aug 2020

Yet Another Intermediate-Level Attack
Qizhang Li, Yiwen Guo, Hao Chen · AAML · 20 Aug 2020

Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Alfred Laugros, A. Caplier, Matthieu Ospici · AAML · 19 Aug 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban, E. Poll, Joost Visser · AAML · 07 Aug 2020

Stronger and Faster Wasserstein Adversarial Attacks
Kaiwen Wu, Allen Wang, Yaoliang Yu · AAML · 06 Aug 2020

Anti-Bandit Neural Architecture Search for Model Defense
Hanlin Chen, Baochang Zhang, Shenjun Xue, Xuan Gong, Hong Liu, Rongrong Ji, David Doermann · AAML · 03 Aug 2020

Stylized Adversarial Defense
Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli · GAN, AAML · 29 Jul 2020

RANDOM MASK: Towards Robust Convolutional Neural Networks
Tiange Luo, Tianle Cai, Mengxiao Zhang, Siyu Chen, Liwei Wang · AAML, OOD · 27 Jul 2020

Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin · OOD · 22 Jul 2020

Robust Image Classification Using A Low-Pass Activation Function and DCT Augmentation
Md Tahmid Hossain, S. Teng, Ferdous Sohel, Guojun Lu · 18 Jul 2020

SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor · AAML · 13 Jul 2020

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon · SSL, AAML · 13 Jul 2020

Boundary thickness and robustness in learning models
Yaoqing Yang, Rekha Khanna, Yaodong Yu, A. Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney · OOD · 09 Jul 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip Torr · NoLa, AAML · 08 Jul 2020

Generating Adversarial Examples with Controllable Non-transferability
Renzhi Wang, Tianwei Zhang, Xiaofei Xie, Lei Ma, Cong Tian, Felix Juefei Xu, Yang Liu · SILM, AAML · 02 Jul 2020

Improving Calibration through the Relationship with Adversarial Robustness
Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi · AAML · 29 Jun 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel · AAML, FAtt · 26 Jun 2020

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
Kaizhao Liang, Jacky Y. Zhang, Wei Ping, Zhuolin Yang, Oluwasanmi Koyejo, Yangqiu Song · AAML · 25 Jun 2020

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples
Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen · AAML · 18 Jun 2020

AdvMind: Inferring Adversary Intent of Black-Box Attacks
Ren Pang, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang · MLAU, AAML · 16 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk · AAML · 15 Jun 2020

Adversarial Self-Supervised Contrastive Learning
Minseon Kim, Jihoon Tack, Sung Ju Hwang · SSL · 13 Jun 2020

Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu · ObjD, VLM · 11 Jun 2020

Provable tradeoffs in adversarially robust classification
Yan Sun, Hamed Hassani, David Hong, Alexander Robey · 09 Jun 2020

A Self-supervised Approach for Adversarial Robustness
Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli · AAML · 08 Jun 2020

Tricking Adversarial Attacks To Fail
Blerta Lindqvist · AAML · 08 Jun 2020

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers
S. Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli · 06 Jun 2020

Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li · MLT, AAML · 20 May 2020

Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
Linhai Ma, Liang Liang · AAML · 19 May 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal · AAML · 17 May 2020

Encryption Inspired Adversarial Defense for Visual Classification
Maungmaung Aprilpyone, Hitoshi Kiya · 16 May 2020

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, K. Ren · AAML · 14 May 2020

DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses
Yaxin Li, Wei Jin, Han Xu, Jiliang Tang · AAML · 13 May 2020

GraCIAS: Grassmannian of Corrupted Images for Adversarial Security
Ankita Shukla, Pavan Turaga, Saket Anand · AAML · 06 May 2020