Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye, Nicholas Carlini, David Wagner
AAML · arXiv:1802.00420

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 1,929 citing papers.

Adversarial Distributional Training for Robust Deep Learning
Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
OOD · 96 · 123 · 0 · 14 Feb 2020

CEB Improves Model Robustness
Ian S. Fischer, Alexander A. Alemi
AAML · 137 · 30 · 0 · 13 Feb 2020

The Conditional Entropy Bottleneck
Ian S. Fischer
OOD · 125 · 122 · 0 · 13 Feb 2020

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
OOD · 88 · 64 · 0 · 11 Feb 2020

Robustness of Bayesian Neural Networks to Gradient-Based Attacks
Ginevra Carbone, Matthew Wicker, Luca Laurenti, A. Patané, Luca Bortolussi, G. Sanguinetti
AAML · 104 · 79 · 0 · 11 Feb 2020

Improving the affordability of robustness training for DNNs
Sidharth Gupta, Parijat Dube, Ashish Verma
AAML · 57 · 15 · 0 · 11 Feb 2020

Generalised Lipschitz Regularisation Equals Distributional Robustness
Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
OOD · 86 · 21 · 0 · 11 Feb 2020

Adversarial Attacks on Linear Contextual Bandits
Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, O. Teytaud, A. Lazaric, Matteo Pirotta
AAML · 84 · 51 · 0 · 10 Feb 2020

Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection
Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi Wu
AAML · 94 · 8 · 0 · 10 Feb 2020

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images
Avrim Blum, Travis Dick, N. Manoj, Hongyang R. Zhang
AAML · 81 · 79 · 0 · 10 Feb 2020

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 184 · 84 · 0 · 09 Feb 2020

Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
Shehzeen Samarah Hussain, Paarth Neekhara, Malhar Jere, F. Koushanfar, Julian McAuley
AAML · 100 · 154 · 0 · 09 Feb 2020

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
70 · 96 · 0 · 08 Feb 2020

Analysis of Random Perturbations for Robust Convolutional Neural Networks
Adam Dziedzic, S. Krishnan
OOD · AAML · 70 · 1 · 0 · 08 Feb 2020

Semantic Robustness of Models of Source Code
Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, S. Jha, Thomas W. Reps
SILM · AAML · 109 · 98 · 0 · 07 Feb 2020

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
OOD · AAML · 94 · 159 · 0 · 07 Feb 2020

On the Robustness of Face Recognition Algorithms Against Attacks and Bias
Richa Singh, Akshay Agarwal, Maneet Singh, Shruti Nagpal, Mayank Vatsa
CVBM · AAML · 134 · 66 · 0 · 07 Feb 2020

RAID: Randomized Adversarial-Input Detection for Neural Networks
Hasan Ferit Eniser, M. Christakis, Valentin Wüstholz
AAML · 69 · 15 · 0 · 07 Feb 2020

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study
David Mickisch, F. Assion, Florens Greßner, W. Günther, M. Motta
AAML · 69 · 34 · 0 · 05 Feb 2020

Minimax Defense against Gradient-based Adversarial Attacks
Blerta Lindqvist, R. Izmailov
AAML · 27 · 0 · 0 · 04 Feb 2020

Regularizers for Single-step Adversarial Training
B. S. Vivek, R. Venkatesh Babu
AAML · 56 · 7 · 0 · 03 Feb 2020

Towards Sharper First-Order Adversary with Quantized Gradients
Zhuanghua Liu, Ivor W. Tsang
AAML · 42 · 0 · 0 · 01 Feb 2020

Tiny noise, big mistakes: Adversarial perturbations induce errors in Brain-Computer Interface spellers
Xiao Zhang, Dongrui Wu, L. Ding, Hanbin Luo, Chin-Teng Lin, T. Jung, Ricardo Chavarriaga
AAML · 91 · 60 · 0 · 30 Jan 2020

Evaluating Robustness to Context-Sensitive Feature Perturbations of Different Granularities
Isaac Dunn, Laura Hanu, Hadrien Pouget, Daniel Kroening, T. Melham
AAML · 79 · 2 · 0 · 29 Jan 2020

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht
111 · 80 · 0 · 22 Jan 2020

A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications
Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, Jieping Ye
EGVM · 109 · 846 · 0 · 20 Jan 2020

Distortion Agnostic Deep Watermarking
Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, P. Milanfar
WIGM · 76 · 165 · 0 · 14 Jan 2020

Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML · OOD · 162 · 1,182 · 0 · 12 Jan 2020

Guess First to Enable Better Compression and Adversarial Robustness
Sicheng Zhu, Bang An, Shiyu Niu
AAML · 42 · 0 · 0 · 10 Jan 2020

Sampling Prediction-Matching Examples in Neural Networks: A Probabilistic Programming Approach
Serena Booth, Ankit J. Shah, Yilun Zhou, J. Shah
BDL · 35 · 1 · 0 · 09 Jan 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
OOD · AAML · 111 · 178 · 0 · 08 Jan 2020

The Human Visual System and Adversarial AI
Yaoshiang Ho, S. Wookey
28 · 2 · 0 · 05 Jan 2020

Empirical Studies on the Properties of Linear Regions in Deep Neural Networks
Xiao Zhang, Dongrui Wu
58 · 38 · 0 · 04 Jan 2020

Exploiting the Sensitivity of $L_2$ Adversarial Examples to Erase-and-Restore
F. Zuo, Qiang Zeng
AAML · 20 · 1 · 0 · 01 Jan 2020

Quantum Adversarial Machine Learning
Sirui Lu, L. Duan, D. Deng
AAML · 115 · 102 · 0 · 31 Dec 2019

Adversarial Example Generation using Evolutionary Multi-objective Optimization
Takahiro Suzuki, Shingo Takeshita, S. Ono
AAML · 66 · 22 · 0 · 30 Dec 2019

Benchmarking Adversarial Robustness
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
AAML · 108 · 36 · 0 · 26 Dec 2019

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
AAML · 60 · 96 · 0 · 20 Dec 2019

Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, S. Shintre
AAML · 94 · 68 · 0 · 19 Dec 2019

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
Mahmood Sharif, Lujo Bauer, Michael K. Reiter
AAML · 46 · 6 · 0 · 19 Dec 2019

MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking
Rushil Anirudh, Jayaraman J. Thiagarajan, B. Kailkhura, T. Bremer
AAML · 69 · 44 · 0 · 16 Dec 2019

Constructing a provably adversarially-robust classifier from a high accuracy one
Grzegorz Gluch, R. Urbanke
AAML · 47 · 2 · 0 · 16 Dec 2019

Detecting and Correcting Adversarial Images Using Image Processing Operations
H. Nguyen, Minoru Kuribayashi, Junichi Yamagishi, Isao Echizen
AAML · 55 · 1 · 0 · 11 Dec 2019

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FedML · AI4CE · 298 · 6,343 · 0 · 10 Dec 2019

Training Provably Robust Models by Polyhedral Envelope Regularization
Chen Liu, Mathieu Salzmann, Sabine Süsstrunk
AAML · 78 · 8 · 0 · 10 Dec 2019

Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
Will Grathwohl, Kuan-Chieh Wang, J. Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky
VLM · 108 · 548 · 0 · 06 Dec 2019

Adversarial Risk via Optimal Transport and Optimal Couplings
Muni Sreenivas Pydi, Varun Jog
85 · 60 · 0 · 05 Dec 2019

Perfectly Parallel Fairness Certification of Neural Networks
Caterina Urban, M. Christakis, Valentin Wüstholz, Fuyuan Zhang
105 · 72 · 0 · 05 Dec 2019

Towards Robust Image Classification Using Sequential Attention Models
Daniel Zoran, Mike Chrzanowski, Po-Sen Huang, Sven Gowal, Alex Mott, Pushmeet Kohli
AAML · 66 · 61 · 0 · 04 Dec 2019

A Method for Computing Class-wise Universal Adversarial Perturbations
Tejus Gupta, Abhishek Sinha, Nupur Kumari, M. Singh, Balaji Krishnamurthy
AAML · 38 · 10 · 0 · 01 Dec 2019