ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

50 / 1,929 papers shown
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge
Sze Yiu Chau
Bruno Ribeiro
Ninghui Li
AAML
41
1
0
05 Dec 2018
Prototype-based Neural Network Layers: Incorporating Vector Quantization
S. Saralajew
Lars Holdijk
Maike Rees
T. Villmann
MQ
61
15
0
04 Dec 2018
Interpretable Deep Learning under Fire
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
AAMLAI4CE
138
174
0
03 Dec 2018
Disentangling Adversarial Robustness and Generalization
David Stutz
Matthias Hein
Bernt Schiele
AAMLOOD
311
285
0
03 Dec 2018
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou
Florian Tramèr
Giancarlo Pellegrino
AAML
240
294
0
02 Dec 2018
Effects of Loss Functions And Target Representations on Adversarial Robustness
Sean Saito
S. Roy
AAML
72
7
0
01 Dec 2018
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei
Lingfei Wu
Pin-Yu Chen
A. Dimakis
Inderjit S. Dhillon
Michael Witbrock
AAML
102
92
0
01 Dec 2018
Adversarial Defense by Stratified Convolutional Sparse Coding
Bo Sun
Nian-hsuan Tsai
Fangchen Liu
Ronald Yu
Hao Su
AAML
80
76
0
30 Nov 2018
Adversarial Examples as an Input-Fault Tolerance Problem
A. Galloway
A. Golubeva
Graham W. Taylor
SILMAAML
38
0
0
30 Nov 2018
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Akhilan Boopathy
Tsui-Wei Weng
Pin-Yu Chen
Sijia Liu
Luca Daniel
AAML
158
138
0
29 Nov 2018
Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting
Artur Bekasov
Iain Murray
AAMLBDL
63
14
0
29 Nov 2018
A randomized gradient-free attack on ReLU networks
Francesco Croce
Matthias Hein
AAML
74
21
0
28 Nov 2018
Universal Adversarial Training
A. Mendrik
Mahyar Najibi
Zheng Xu
John P. Dickerson
L. Davis
Tom Goldstein
AAMLOOD
102
190
0
27 Nov 2018
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies
Bao Wang
Binjie Yuan
Zuoqiang Shi
Stanley J. Osher
AAMLOOD
78
15
0
26 Nov 2018
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
Jianyu Wang
Haichao Zhang
OODAAML
87
119
0
26 Nov 2018
Noisy Computations during Inference: Harmful or Helpful?
Minghai Qin
D. Vučinić
AAML
31
5
0
26 Nov 2018
Attention, Please! Adversarial Defense via Activation Rectification and Preservation
Shangxi Wu
Jitao Sang
Kaiyuan Xu
Jiaming Zhang
Jian Yu
AAML
52
7
0
24 Nov 2018
Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
J. Uesato
P. Frossard
AAML
105
319
0
23 Nov 2018
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
Jérôme Rony
L. G. Hafemann
Luiz Eduardo Soares de Oliveira
Ismail Ben Ayed
R. Sabourin
Eric Granger
AAML
78
299
0
23 Nov 2018
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
67
292
0
22 Nov 2018
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles
Edward Grefenstette
Robert Stanforth
Brendan O'Donoghue
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
80
18
0
22 Nov 2018
Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces
Ziv Katzir
Yuval Elovici
AAML
60
26
0
22 Nov 2018
Task-generalizable Adversarial Attack based on Perceptual Metric
Muzammal Naseer
Salman H. Khan
Shafin Rahman
Fatih Porikli
AAML
73
40
0
22 Nov 2018
How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples
Utku Ozbulak
W. D. Neve
Arnout Van Messem
AAML
39
7
0
21 Nov 2018
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense
Rushil Anirudh
Jayaraman J. Thiagarajan
B. Kailkhura
T. Bremer
GAN
53
2
0
20 Nov 2018
Intermediate Level Adversarial Attack for Enhanced Transferability
Qian Huang
Zeqi Gu
Isay Katsman
Horace He
Pian Pawakapan
Zhiqiu Lin
Serge J. Belongie
Ser-Nam Lim
AAMLSILM
54
4
0
20 Nov 2018
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples
Hajime Ono
Tsubasa Takahashi
Kazuya Kakizaki
AAML
49
4
0
20 Nov 2018
Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures
Hongyang R. Zhang
Susu Xu
Jiantao Jiao
P. Xie
Ruslan Salakhutdinov
Eric Xing
71
23
0
19 Nov 2018
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding
Yao Li
Martin Renqiang Min
Wenchao Yu
Cho-Jui Hsieh
T. C. Lee
E. Kruus
OT
60
7
0
19 Nov 2018
Scalable agent alignment via reward modeling: a research direction
Jan Leike
David M. Krueger
Tom Everitt
Miljan Martic
Vishal Maini
Shane Legg
124
420
0
19 Nov 2018
Generalizable Adversarial Training via Spectral Normalization
Farzan Farnia
Jesse M. Zhang
David Tse
OODAAML
83
140
0
19 Nov 2018
A Spectral View of Adversarially Robust Features
Shivam Garg
Vatsal Sharan
B. Zhang
Gregory Valiant
AAML
154
21
0
15 Nov 2018
Mathematical Analysis of Adversarial Attacks
Zehao Dou
Stanley J. Osher
Bao Wang
AAML
67
18
0
15 Nov 2018
Theoretical Analysis of Adversarial Learning: A Minimax Approach
Zhuozhuo Tu
Jingwei Zhang
Dacheng Tao
AAML
72
68
0
13 Nov 2018
New CleverHans Feature: Better Adversarial Robustness Evaluations with Attack Bundling
Ian Goodfellow
AAML
25
2
0
08 Nov 2018
A Geometric Perspective on the Transferability of Adversarial Directions
Duncan C. McElfresh
H. Bidkhori
Dimitris Papailiopoulos
AAML
50
17
0
08 Nov 2018
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
K. Makarychev
Pascal Dupré
Yury Makarychev
Giancarlo Pellegrino
Dan Boneh
AAML
104
64
0
08 Nov 2018
MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang
Yizheng Chen
Ahmed Abdou
Mohsen Guizani
AAML
43
23
0
06 Nov 2018
Blockchain and human episodic memory
S. Cho
Cody A Cushing
Kunal Patel
Alok Kothari
Rongjian Lan
Matthew Mattina
Mouslim Cherkaoui
Hakwan Lau
11
1
0
06 Nov 2018
SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters
Hassan Ali
Faiq Khalid
Hammad Tariq
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Mohamed Bennai
AAML
133
14
0
04 Nov 2018
Learning to Defend by Learning to Attack
Haoming Jiang
Zhehui Chen
Yuyang Shi
Bo Dai
T. Zhao
108
22
0
03 Nov 2018
Semidefinite relaxations for certifying robustness to adversarial examples
Aditi Raghunathan
Jacob Steinhardt
Percy Liang
AAML
124
439
0
02 Nov 2018
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca
Battista Biggio
Giorgio Giacinto
AAML
80
47
0
02 Nov 2018
Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh
Jacob Steinhardt
Percy Liang
110
244
0
02 Nov 2018
On the Geometry of Adversarial Examples
Marc Khoury
Dylan Hadfield-Menell
AAML
81
79
0
01 Nov 2018
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Sven Gowal
Krishnamurthy Dvijotham
Robert Stanforth
Rudy Bunel
Chongli Qin
J. Uesato
Relja Arandjelović
Timothy A. Mann
Pushmeet Kohli
AAML
109
559
0
30 Oct 2018
Logit Pairing Methods Can Fool Gradient-Based Attacks
Marius Mosbach
Maksym Andriushchenko
T. A. Trost
Matthias Hein
Dietrich Klakow
AAML
68
83
0
29 Oct 2018
Rademacher Complexity for Adversarially Robust Generalization
Dong Yin
Kannan Ramchandran
Peter L. Bartlett
AAML
105
261
0
29 Oct 2018
Robust Adversarial Learning via Sparsifying Front Ends
S. Gopalakrishnan
Zhinus Marzi
Metehan Cekic
Upamanyu Madhow
Ramtin Pedarsani
AAML
58
3
0
24 Oct 2018
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
Mohammad J. Hashemi
Greg Cusack
Eric Keller
AAMLSILM
51
8
0
23 Oct 2018