ResearchTrend.AI — Cited By: 1802.00420
© 2025 ResearchTrend.AI, All rights reserved.
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 1 February 2018
Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

33 / 733 papers shown
• Curriculum Adversarial Training
  Qi-Zhi Cai, Min Du, Chang-rui Liu, D. Song · AAML · 13 May 2018
• Deep Nets: What have they ever done for Vision?
  Alan Yuille, Chenxi Liu · 10 May 2018
• Verisimilar Percept Sequences Tests for Autonomous Driving Intelligent Agent Assessment
  Thomio Watanabe, D. Wolf · 07 May 2018
• Adversarially Robust Generalization Requires More Data
  Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, A. Madry · OOD, AAML · 30 Apr 2018
• VectorDefense: Vectorization as a Defense to Adversarial Examples
  V. Kabilan, Brandon L. Morris, Anh Totti Nguyen · AAML · 23 Apr 2018
• ADef: an Iterative Algorithm to Construct Adversarial Deformations
  Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson · AAML · 20 Apr 2018
• ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
  Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau · ObjD · 16 Apr 2018
• Adversarial Attacks Against Medical Deep Learning Systems
  S. G. Finlayson, Hyung Won Chung, I. Kohane, Andrew L. Beam · SILM, AAML, OOD, MedIm · 15 Apr 2018
• An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
  Pu Zhao, Sijia Liu, Yanzhi Wang, X. Lin · AAML · 09 Apr 2018
• Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
  Alex Lamb, Jonathan Binas, Anirudh Goyal, Dmitriy Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio · OOD · 07 Apr 2018
• On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
  Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu · AAML · 26 Mar 2018
• Adversarial Defense based on Structure-to-Signal Autoencoders
  Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel · AAML · 21 Mar 2018
• Adversarial Logit Pairing
  Harini Kannan, Alexey Kurakin, Ian Goodfellow · AAML · 16 Mar 2018
• Semantic Adversarial Examples
  Hossein Hosseini, Radha Poovendran · GAN, AAML · 16 Mar 2018
• Understanding and Enhancing the Transferability of Adversarial Examples
  Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan · AAML, SILM · 27 Feb 2018
• Robust GANs against Dishonest Adversaries
  Zhi Xu, Chengtao Li, Stefanie Jegelka · AAML · 27 Feb 2018
• L2-Nonexpansive Neural Networks
  Haifeng Qian, M. Wegman · 22 Feb 2018
• Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
  Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau · FedML, AAML · 19 Feb 2018
• Are Generative Classifiers More Robust to Adversarial Attacks?
  Yingzhen Li, John Bradshaw, Yash Sharma · AAML · 19 Feb 2018
• Secure Detection of Image Manipulation by means of Random Feature Selection
  Z. Chen, B. Tondi, Xiaolong Li, R. Ni, Yao-Min Zhao, Mauro Barni · AAML · 02 Feb 2018
• Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
  Nicholas Carlini, D. Wagner · AAML · 05 Jan 2018
• A General Framework for Adversarial Examples with Objectives
  Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter · AAML, GAN · 31 Dec 2017
• The Robust Manifold Defense: Adversarial Training using Generative Models
  A. Jalal, Andrew Ilyas, C. Daskalakis, A. Dimakis · AAML · 26 Dec 2017
• Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli · AAML · 08 Dec 2017
• Generative Adversarial Perturbations
  Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge J. Belongie · AAML, GAN, WIGM · 06 Dec 2017
• Towards Robust Neural Networks via Random Self-ensemble
  Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh · FedML, AAML · 02 Dec 2017
• Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
  Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, S. Jha · AAML · 21 Nov 2017
• Evaluating Robustness of Neural Networks with Mixed Integer Programming
  Vincent Tjeng, Kai Y. Xiao, Russ Tedrake · AAML · 20 Nov 2017
• Adversarial Attacks Beyond the Image Space
  Fangyin Wei, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, Alan Yuille · AAML · 20 Nov 2017
• Provably Minimally-Distorted Adversarial Examples
  Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill · AAML · 29 Sep 2017
• Ensemble Adversarial Training: Attacks and Defenses
  Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel · AAML · 19 May 2017
• Adversarial Machine Learning at Scale
  Alexey Kurakin, Ian Goodfellow, Samy Bengio · AAML · 04 Nov 2016
• Adversarial examples in the physical world
  Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016