
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

1 February 2018
Anish Athalye
Nicholas Carlini
David Wagner
    AAML
arXiv:1802.00420 · abs · PDF · HTML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples"

Showing 50 of 1,929 citing papers
Understanding Robustness in Teacher-Student Setting: A New Perspective
Zhuolin Yang
Zhaoxi Chen
Tiffany Cai
Xinyun Chen
Yue Liu
Yuandong Tian
AAML
55
2
0
25 Feb 2021
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
Maura Pintor
Fabio Roli
Wieland Brendel
Battista Biggio
AAML
92
73
0
25 Feb 2021
Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis
Leo Schwinn
A. Nguyen
René Raab
Leon Bungert
Daniel Tenbrinck
Dario Zanca
Martin Burger
Bjoern M. Eskofier
AAML
42
16
0
24 Feb 2021
Multiplicative Reweighting for Robust Neural Network Optimization
Noga Bar
Tomer Koren
Raja Giryes
OODNoLa
85
9
0
24 Feb 2021
Automated Discovery of Adaptive Attacks on Adversarial Defenses
Chengyuan Yao
Pavol Bielik
Petar Tsankov
Martin Vechev
AAML
99
24
0
23 Feb 2021
On the robustness of randomized classifiers to adversarial examples
Rafael Pinot
Laurent Meunier
Florian Yger
Cédric Gouy-Pailler
Y. Chevaleyre
Jamal Atif
AAML
75
14
0
22 Feb 2021
Effective and Efficient Vote Attack on Capsule Networks
Jindong Gu
Baoyuan Wu
Volker Tresp
AAML
70
27
0
19 Feb 2021
Center Smoothing: Certified Robustness for Networks with Structured Outputs
Aounon Kumar
Tom Goldstein
OODAAMLUQCV
84
19
0
19 Feb 2021
Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids
Jiangnan Li
Yingyuan Yang
Jinyuan Stella Sun
K. Tomsovic
Hairong Qi
AAML
127
15
0
17 Feb 2021
Bridging the Gap Between Adversarial Robustness and Optimization Bias
Fartash Faghri
Sven Gowal
C. N. Vasconcelos
David J. Fleet
Fabian Pedregosa
Nicolas Le Roux
AAML
234
7
0
17 Feb 2021
Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla
Sahil Singla
David Jacobs
Soheil Feizi
AAML
102
47
0
15 Feb 2021
Certifiably Robust Variational Autoencoders
Ben Barrett
A. Camuto
M. Willetts
Tom Rainforth
AAMLDRL
88
17
0
15 Feb 2021
Data Quality Matters For Adversarial Training: An Empirical Study
Chengyu Dong
Liyuan Liu
Jingbo Shang
AAML
59
10
0
15 Feb 2021
Generating Structured Adversarial Attacks Using Frank-Wolfe Method
Ehsan Kazemi
Thomas Kerdreux
Liquang Wang
AAMLDiffM
53
1
0
15 Feb 2021
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
Mingu Kang
T. Tran
Seungju Cho
Daeyoung Kim
AAML
49
3
0
15 Feb 2021
Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi
D. Rawat
Chunmei Liu
95
138
0
14 Feb 2021
Perceptually Constrained Adversarial Attacks
Muhammad Zaid Hameed
András Gyorgy
60
12
0
14 Feb 2021
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective
Chaoning Zhang
Philipp Benz
Adil Karjauv
In So Kweon
AAML
97
42
0
12 Feb 2021
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Bohang Zhang
Tianle Cai
Zhou Lu
Di He
Liwei Wang
OOD
92
51
0
10 Feb 2021
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
Hanshu Yan
Jingfeng Zhang
Gang Niu
Jiashi Feng
Vincent Y. F. Tan
Masashi Sugiyama
AAML
49
42
0
10 Feb 2021
Bayesian Inference with Certifiable Adversarial Robustness
Matthew Wicker
Luca Laurenti
A. Patané
Zhoutong Chen
Zheng Zhang
Marta Z. Kwiatkowska
AAMLBDL
142
30
0
10 Feb 2021
Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis
Fengting Li
Xuankai Liu
Xiaoli Zhang
Qi Li
Kun Sun
Kang Li
AAML
73
13
0
10 Feb 2021
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
Sahar Abdelnabi
Mario Fritz
AAML
44
7
0
09 Feb 2021
Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples
Jay Nandy
Sudipan Saha
Wynne Hsu
Mong Li Lee
Xiaosu Zhu
AAML
82
4
0
09 Feb 2021
Target Training Does Adversarial Training Without Adversarial Samples
Blerta Lindqvist
AAML
25
0
0
09 Feb 2021
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao
Lei Feng
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
143
73
0
09 Feb 2021
Efficient Certified Defenses Against Patch Attacks on Image Classifiers
J. H. Metzen
Maksym Yatsura
AAML
61
41
0
08 Feb 2021
Meta-Learning with Neural Tangent Kernels
Yufan Zhou
Zhenyi Wang
Jiayi Xian
Changyou Chen
Jinhui Xu
62
20
0
07 Feb 2021
Adversarial Imaging Pipelines
Buu Phan
Fahim Mannan
Felix Heide
AAML
56
26
0
07 Feb 2021
Noise Optimization for Artificial Neural Networks
Li Xiao
Zeliang Zhang
Yijie Peng
113
14
0
06 Feb 2021
Understanding the Interaction of Adversarial Training with Noisy Labels
Jianing Zhu
Jingfeng Zhang
Bo Han
Tongliang Liu
Gang Niu
Hongxia Yang
Mohan Kankanhalli
Masashi Sugiyama
AAML
97
27
0
06 Feb 2021
Robust Single-step Adversarial Training with Regularizer
Lehui Xie
Yaopeng Wang
Jianwei Yin
Ximeng Liu
AAML
59
1
0
05 Feb 2021
Optimal Transport as a Defense Against Adversarial Attacks
Quentin Bouniot
Romaric Audigier
Angélique Loesch
AAMLOOD
32
9
0
05 Feb 2021
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks
Chong Xiang
Prateek Mittal
AAML
113
53
0
05 Feb 2021
Adversarially Robust Learning with Unknown Perturbation Sets
Omar Montasser
Steve Hanneke
Nathan Srebro
AAML
85
28
0
03 Feb 2021
IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks
Yixiang Wang
Jiqiang Liu
Xiaolin Chang
J. Misic
Vojislav B. Mišić
AAML
69
12
0
03 Feb 2021
Fast Training of Provably Robust Neural Networks by SingleProp
Akhilan Boopathy
Tsui-Wei Weng
Sijia Liu
Pin-Yu Chen
Gaoyuan Zhang
Luca Daniel
AAML
57
7
0
01 Feb 2021
Admix: Enhancing the Transferability of Adversarial Attacks
Xiaosen Wang
Xu He
Jingdong Wang
Kun He
AAML
153
201
0
31 Jan 2021
Adversarial Learning with Cost-Sensitive Classes
Hao Shen
Sihong Chen
Ran Wang
Xizhao Wang
AAML
77
11
0
29 Jan 2021
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems
Utku Ozbulak
Baptist Vandersmissen
A. Jalalvand
Ivo Couckuyt
Arnout Van Messem
W. D. Neve
AAML
31
19
0
26 Jan 2021
Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning
Anh-Vu Bui
Trung Le
He Zhao
Paul Montague
S. Çamtepe
Dinh Q. Phung
AAML
53
14
0
25 Jan 2021
A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo
Wei Bao
Jiakai Wang
Yuqing Ma
Xing Gao
Gang Xiao
Aishan Liu
Zehao Zhao
Xianglong Liu
Wenjun Wu
AAMLELM
97
61
0
24 Jan 2021
Error Diffusion Halftoning Against Adversarial Examples
Shao-Yuan Lo
Vishal M. Patel
DiffM
61
4
0
23 Jan 2021
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
James Tu
Huichen Li
Xinchen Yan
Mengye Ren
Yun Chen
Ming Liang
E. Bitar
Ersin Yumer
R. Urtasun
AAML
91
78
0
17 Jan 2021
Removing Undesirable Feature Contributions Using Out-of-Distribution Data
Saehyung Lee
Changhwa Park
Hyungyu Lee
Jihun Yi
Jonghyun Lee
Sungroh Yoon
OODD
102
26
0
17 Jan 2021
Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks
Jia-Wei Liu
Yaochu Jin
AAMLOOD
75
38
0
16 Jan 2021
Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi
Adel Javanmard
Ryan A. Rossi
Anup B. Rao
Tung Mai
AAML
55
18
0
15 Jan 2021
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
Junyoung Byun
Hyojun Go
Changick Kim
AAML
193
21
0
13 Jan 2021
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
Andreas Bär
Jonas Löhdefink
Nikhil Kapoor
Serin Varghese
Fabian Hüger
Peter Schlicht
Tim Fingscheidt
AAML
192
35
0
11 Jan 2021
DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning
Olakunle Ibitoye
M. O. Shafiq
Ashraf Matrawy
FedML
55
19
0
08 Jan 2021