ResearchTrend.AI
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
D. Wagner
    AAML
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 1,929 citing papers.
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
Christian Berghoff, Matthias Neu, Arndt von Twickel · AAML · 18 Mar 2020 · 109 / 25 / 0

Anomalous Example Detection in Deep Learning: A Survey
Saikiran Bulusu, B. Kailkhura, Yue Liu, P. Varshney, Basel Alomair · AAML · 16 Mar 2020 · 163 / 47 / 0

Towards Face Encryption by Generating Adversarial Identity Masks
Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, YueFeng Chen, H. Xue · AAML, PICV · 15 Mar 2020 · 151 / 75 / 0

Certified Defenses for Adversarial Patches
Ping Yeh-Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein · AAML · 14 Mar 2020 · 67 / 172 / 0

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Chawin Sitawarin, David Wagner · AAML · 14 Mar 2020 · 67 / 20 / 0

Topological Effects on Attacks Against Vertex Classification
B. A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin S. Chan, Tina Eliassi-Rad · AAML · 12 Mar 2020 · 46 / 2 / 0

Manifold Regularization for Locally Stable Deep Neural Networks
Charles Jin, Martin Rinard · AAML · 09 Mar 2020 · 94 / 15 / 0

On the Robustness of Cooperative Multi-Agent Reinforcement Learning
Jieyu Lin, Kristina Dzeparoska, Shanghang Zhang, A. Leon-Garcia, Nicolas Papernot · AAML · 08 Mar 2020 · 132 / 69 / 0

Adversarial Machine Learning: Bayesian Perspectives
D. Insua, Roi Naveiro, Víctor Gallego, Jason Poulos · AAML · 07 Mar 2020 · 27 / 21 / 0

Exploiting Verified Neural Networks via Floating Point Numerical Error
Kai Jia, Martin Rinard · AAML · 06 Mar 2020 · 97 / 37 / 0

Confusing and Detecting ML Adversarial Attacks with Injected Attractors
Jiyi Zhang, E. Chang, H. Lee · AAML · 05 Mar 2020 · 60 / 1 / 0

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Saehyung Lee, Hyungyu Lee, Sungroh Yoon · AAML · 05 Mar 2020 · 252 / 119 / 0

Colored Noise Injection for Training Adversarially Robust Neural Networks
Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, A. Mendelson, A. Bronstein · AAML · 04 Mar 2020 · 32 / 5 / 0

Deep Neural Network Perception Models and Robust Autonomous Driving Systems
M. Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, A. Wong · OOD · 04 Mar 2020 · 62 / 16 / 0

Denoised Smoothing: A Provable Defense for Pretrained Classifiers
Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter · 04 Mar 2020 · 94 / 23 / 0

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein · AAML · 03 Mar 2020 · 302 / 1,866 / 0

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 02 Mar 2020 · 129 / 67 / 0

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets
Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher · AAML · 02 Mar 2020 · 34 / 17 / 0

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models
Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans · 01 Mar 2020 · 76 / 17 / 0

Improving Certified Robustness via Statistical Learning with Logical Reasoning
Zhuolin Yang, Zhikuan Zhao, Wei Ping, Jiawei Zhang, Linyi Li, ..., Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Yue Liu · AAML · 28 Feb 2020 · 140 / 13 / 0

Are L2 adversarial examples intrinsically different?
Mingxuan Li, Jingyuan Wang, Yufan Wu · AAML · 28 Feb 2020 · 16 / 0 / 0

Detecting Patch Adversarial Attacks with Image Residuals
Marius Arvinte, Ahmed H. Tewfik, S. Vishwanath · AAML · 28 Feb 2020 · 29 / 5 / 0

TSS: Transformation-Specific Smoothing for Robustness Certification
Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, B. Kailkhura, Tao Xie, Ce Zhang, Yue Liu · AAML · 27 Feb 2020 · 147 / 57 / 0

On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks
Yue Zhao, Yuwei Wu, Caihua Chen, A. Lim · 3DPC · 27 Feb 2020 · 97 / 72 / 0

Improving Robustness of Deep-Learning-Based Image Reconstruction
Ankit Raj, Y. Bresler, Yue Liu · OOD, AAML · 26 Feb 2020 · 96 / 51 / 0

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy
Aditya Saligrama, Guillaume Leclerc · AAML · 26 Feb 2020 · 25 / 1 / 0

Overfitting in adversarially robust deep learning
Leslie Rice, Eric Wong, Zico Kolter · 26 Feb 2020 · 167 / 812 / 0

Randomization matters. How to defend against strong adversarial attacks
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif · AAML · 26 Feb 2020 · 130 / 60 / 0

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot · AAML · 26 Feb 2020 · 90 / 137 / 0

Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks
Sandesh Kamath, Amit Deshpande, Subrahmanyam Kambhampati Venkata, V. Balasubramanian · 26 Feb 2020 · 88 / 12 / 0

Adversarial Ranking Attack and Defense
Mo Zhou, Zhenxing Niu, Le Wang, Qilin Zhang, G. Hua · 26 Feb 2020 · 150 / 39 / 0

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli · AAML · 26 Feb 2020 · 69 / 406 / 0

Wireless Fingerprinting via Deep Learning: The Impact of Confounding Factors
Metehan Cekic, S. Gopalakrishnan, Upamanyu Madhow · 25 Feb 2020 · 41 / 11 / 0

HYDRA: Pruning Adversarially Robust Neural Networks
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana · AAML · 24 Feb 2020 · 69 / 25 / 0

Lagrangian Decomposition for Neural Network Verification
Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip Torr, M. P. Kumar · 24 Feb 2020 · 81 / 50 / 0

Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples
Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah · AAML · 22 Feb 2020 · 58 / 19 / 0

Polarizing Front Ends for Robust CNNs
Can Bakiskan, S. Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani · AAML · 22 Feb 2020 · 45 / 4 / 0

UnMask: Adversarial Detection and Defense Through Robust Feature Alignment
Scott Freitas, Shang-Tse Chen, Zijie J. Wang, Duen Horng Chau · AAML · 21 Feb 2020 · 61 / 23 / 0

Robustness from Simple Classifiers
Sharon Qian, Dimitris Kalimeris, Gal Kaplun, Yaron Singer · AAML · 21 Feb 2020 · 18 / 1 / 0

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu · AAML · 21 Feb 2020 · 99 / 64 / 0

Boosting Adversarial Training with Hypersphere Embedding
Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su · AAML · 20 Feb 2020 · 89 / 156 / 0

AdvMS: A Multi-source Multi-cost Defense Against Adversarial Attacks
Tianlin Li, Siyue Wang, Pin-Yu Chen, Xinyu Lin, Peter Chin · AAML · 19 Feb 2020 · 44 / 3 / 0

On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry · AAML · 19 Feb 2020 · 297 / 840 / 0

Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example
Serena Booth, Yilun Zhou, Ankit J. Shah, J. Shah · BDL · 19 Feb 2020 · 42 / 2 / 0

Randomized Smoothing of All Shapes and Sizes
Greg Yang, Tony Duan, J. E. Hu, Hadi Salman, Ilya P. Razenshteyn, Jungshian Li · AAML · 19 Feb 2020 · 103 / 216 / 0

Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
Tsubasa Takahashi · GNN, AAML · 19 Feb 2020 · 150 / 37 / 0

Deflecting Adversarial Attacks
Yao Qin, Nicholas Frosst, Colin Raffel, G. Cottrell, Geoffrey E. Hinton · AAML · 18 Feb 2020 · 64 / 15 / 0

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness
Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Y. Ning · AAML · 17 Feb 2020 · 71 / 11 / 0

CAT: Customized Adversarial Training for Improved Robustness
Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh · OOD, AAML · 17 Feb 2020 · 102 / 117 / 0

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora · 16 Feb 2020 · 127 / 53 / 0