Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 711 papers shown
Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu
Sijia Liu
Gaoyuan Zhang
Mengshu Sun
Pu Zhao
Quanfu Fan
Chuang Gan
X. Lin
AAML
FAtt
24
43
0
03 Apr 2019
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa
Salman Khan
Munawar Hayat
Roland Göcke
Jianbing Shen
Ling Shao
AAML
17
151
0
01 Apr 2019
Defending against adversarial attacks by randomized diversification
O. Taran
Shideh Rezaeifar
T. Holotyak
Slava Voloshynovskiy
AAML
21
38
0
01 Apr 2019
Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
21
219
0
28 Mar 2019
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce
Jonas Rauber
Matthias Hein
AAML
20
30
0
27 Mar 2019
Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang
Percy Liang
AAML
32
75
0
25 Mar 2019
The Random Conditional Distribution for Higher-Order Probabilistic Inference
Zenna Tavares
Xin Zhang
Edgar Minaysan
Javier Burroni
Rajesh Ranganath
Armando Solar-Lezama
22
9
0
25 Mar 2019
The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay
Aram-Alexandre Pooladian
Adam M. Oberman
AAML
26
25
0
25 Mar 2019
Variational Inference with Latent Space Quantization for Adversarial Resilience
Vinay Kyatham
Prathosh A. P.
Tarun Kumar Yadav
Deepak Mishra
Dheeraj Mundhra
AAML
19
3
0
24 Mar 2019
Semantics Preserving Adversarial Learning
Ousmane Amadou Dia
Elnaz Barshan
Reza Babanezhad
AAML
GAN
29
2
0
10 Mar 2019
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki
Kumar Abhishek
Shekoofeh Azizi
Ghassan Hamarneh
AAML
31
40
0
03 Mar 2019
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN
Ke Sun
Zhanxing Zhu
Zhouchen Lin
AAML
19
20
0
28 Feb 2019
Adversarial Attack and Defense on Point Sets
Jiancheng Yang
Qiang Zhang
Rongyao Fang
Bingbing Ni
Jinxian Liu
Qi Tian
3DPC
24
122
0
28 Feb 2019
Robust Decision Trees Against Adversarial Examples
Hongge Chen
Huan Zhang
Duane S. Boning
Cho-Jui Hsieh
AAML
20
116
0
27 Feb 2019
Disentangled Deep Autoencoding Regularization for Robust Image Classification
Zhenyu Duan
Martin Renqiang Min
Erran L. Li
Mingbo Cai
Yi Tian Xu
Bingbing Ni
13
2
0
27 Feb 2019
Verification of Non-Linear Specifications for Neural Networks
Chongli Qin
Krishnamurthy Dvijotham
Brendan O'Donoghue
Rudy Bunel
Robert Stanforth
Sven Gowal
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
14
43
0
25 Feb 2019
On the Sensitivity of Adversarial Robustness to Input Data Distributions
G. Ding
Kry Yik-Chau Lui
Xiaomeng Jin
Luyu Wang
Ruitong Huang
OOD
26
59
0
22 Feb 2019
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong
Frank R. Schmidt
J. Zico Kolter
AAML
27
210
0
21 Feb 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
Kevin Roth
Yannic Kilcher
Thomas Hofmann
AAML
27
175
0
13 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
17
1,995
0
08 Feb 2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Nic Ford
Justin Gilmer
Nicholas Carlini
E. D. Cubuk
AAML
27
318
0
29 Jan 2019
Improving Adversarial Robustness via Promoting Ensemble Diversity
Tianyu Pang
Kun Xu
Chao Du
Ning Chen
Jun Zhu
AAML
41
434
0
25 Jan 2019
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
Kamil Nar
Orhan Ocal
S. Shankar Sastry
Kannan Ramchandran
AAML
27
54
0
24 Jan 2019
The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang
Hongge Chen
Zhao Song
Duane S. Boning
Inderjit S. Dhillon
Cho-Jui Hsieh
AAML
19
144
0
15 Jan 2019
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
Huangxun Chen
Chenyu Huang
Qianyi Huang
Qian Zhang
Wei Wang
AAML
31
26
0
12 Jan 2019
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification
L. G. Hafemann
R. Sabourin
Luiz Eduardo Soares de Oliveira
AAML
11
42
0
10 Jan 2019
Image Super-Resolution as a Defense Against Adversarial Attacks
Aamir Mustafa
Salman H. Khan
Munawar Hayat
Jianbing Shen
Ling Shao
AAML
SupR
24
167
0
07 Jan 2019
Adversarial Attack and Defense on Graph Data: A Survey
Lichao Sun
Yingtong Dou
Carl Yang
Ji Wang
Yixin Liu
Philip S. Yu
Lifang He
Yangqiu Song
GNN
AAML
23
275
0
26 Dec 2018
A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples
Qiang Zeng
Jianhai Su
Chenglong Fu
Golam Kayas
Lannan Luo
AAML
27
46
0
26 Dec 2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein
Maksym Andriushchenko
Julian Bitterwolf
OODD
55
553
0
13 Dec 2018
Adversarial Framing for Image and Video Classification
Konrad Zolna
Michal Zajac
Negar Rostamzadeh
Pedro H. O. Pinheiro
AAML
30
60
0
11 Dec 2018
On the Security of Randomized Defenses Against Adversarial Samples
K. Sharad
G. Marson
H. Truong
Ghassan O. Karame
AAML
25
1
0
11 Dec 2018
Defending Against Universal Perturbations With Shared Adversarial Training
Chaithanya Kumar Mummadi
Thomas Brox
J. H. Metzen
AAML
18
60
0
10 Dec 2018
Feature Denoising for Improving Adversarial Robustness
Cihang Xie
Yuxin Wu
Laurens van der Maaten
Alan Yuille
Kaiming He
44
904
0
09 Dec 2018
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
G. Ding
Yash Sharma
Kry Yik-Chau Lui
Ruitong Huang
AAML
21
270
0
06 Dec 2018
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge
Sze Yiu Chau
Bruno Ribeiro
Ninghui Li
AAML
27
1
0
05 Dec 2018
Prototype-based Neural Network Layers: Incorporating Vector Quantization
S. Saralajew
Lars Holdijk
Maike Rees
T. Villmann
MQ
22
15
0
04 Dec 2018
Interpretable Deep Learning under Fire
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
AAML
AI4CE
24
169
0
03 Dec 2018
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou
Florian Tramèr
Giancarlo Pellegrino
AAML
173
288
0
02 Dec 2018
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei
Lingfei Wu
Pin-Yu Chen
A. Dimakis
Inderjit S. Dhillon
Michael Witbrock
AAML
15
92
0
01 Dec 2018
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting
Artur Bekasov
Iain Murray
AAML
BDL
20
14
0
29 Nov 2018
A randomized gradient-free attack on ReLU networks
Francesco Croce
Matthias Hein
AAML
37
21
0
28 Nov 2018
Noisy Computations during Inference: Harmful or Helpful?
Minghai Qin
D. Vučinić
AAML
11
5
0
26 Nov 2018
Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
J. Uesato
P. Frossard
AAML
29
318
0
23 Nov 2018
Scalable agent alignment via reward modeling: a research direction
Jan Leike
David M. Krueger
Tom Everitt
Miljan Martic
Vishal Maini
Shane Legg
34
397
0
19 Nov 2018
Mathematical Analysis of Adversarial Attacks
Zehao Dou
Stanley J. Osher
Bao Wang
AAML
24
18
0
15 Nov 2018
Theoretical Analysis of Adversarial Learning: A Minimax Approach
Zhuozhuo Tu
Jingwei Zhang
Dacheng Tao
AAML
15
68
0
13 Nov 2018
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
K. Makarychev
Pascal Dupré
Yury Makarychev
Giancarlo Pellegrino
Dan Boneh
AAML
29
64
0
08 Nov 2018
MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang
Yizheng Chen
Ahmed Abdou
Mohsen Guizani
AAML
21
23
0
06 Nov 2018
Learning to Defend by Learning to Attack
Haoming Jiang
Zhehui Chen
Yuyang Shi
Bo Dai
T. Zhao
18
22
0
03 Nov 2018