ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
arXiv:1802.00420 (v4, latest) · 1 February 2018
Anish Athalye, Nicholas Carlini, D. Wagner
AAML

Papers citing "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" (50 of 1,929 papers shown)
• Adversarial Classification: Necessary conditions and geometric flows — Nicolas García Trillos, Ryan W. Murray (AAML) — 21 Nov 2020
• Contextual Fusion For Adversarial Robustness — Aiswarya Akumalla, S. Haney, M. Bazhenov (AAML) — 18 Nov 2020
• Adversarial Turing Patterns from Cellular Automata — Nurislam Tursynbek, I. Vilkoviskiy, Maria Sindeeva, Ivan Oseledets (AAML) — 18 Nov 2020
• Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification — Weitao Wan, Jiansheng Chen, Cheng Yu, Tong Wu, Yuanyi Zhong, Ming-Hsuan Yang — 18 Nov 2020
• Adversarially Robust Classification based on GLRT — Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani (VLM · AAML) — 16 Nov 2020
• Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks — Maungmaung Aprilpyone, Hitoshi Kiya (FedML) — 16 Nov 2020
• Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations — Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong — 15 Nov 2020
• Adversarial Image Color Transformations in Explicit Color Filter Space — Zhengyu Zhao, Zhuoran Liu, Martha Larson (AAML) — 12 Nov 2020
• Efficient and Transferable Adversarial Examples from Bayesian Neural Networks — Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen (AAML) — 10 Nov 2020
• Risk Assessment for Machine Learning Models — Paul Schwerdtner, Florens Greßner, Nikhil Kapoor, F. Assion, René Sass, W. Günther, Fabian Hüger, Peter Schlicht — 09 Nov 2020
• Bridging the Performance Gap between FGSM and PGD Adversarial Training — Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy (AAML) — 07 Nov 2020
• A survey on practical adversarial examples for malware classifiers — Daniel Park, B. Yener (AAML) — 06 Nov 2020
• Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA — Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan (AAML) — 05 Nov 2020
• Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks — Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern M. Eskofier, Daniel Tenbrinck, Martin Burger (AAML) — 05 Nov 2020
• Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection — Hao Fu, A. Veldanda, Prashanth Krishnamurthy, S. Garg, Farshad Khorrami (AAML) — 04 Nov 2020
• MAD-VAE: Manifold Awareness Defense Variational Autoencoder — Frederick Morlock, Dingsu Wang (AAML · DRL) — 31 Oct 2020
• Integer Programming-based Error-Correcting Output Code Design for Robust Classification — Samarth Gupta, Saurabh Amin — 30 Oct 2020
• Machine Learning (In) Security: A Stream of Problems — Fabrício Ceschin, Marcus Botacin, Albert Bifet, Bernhard Pfahringer, Luiz Eduardo Soares de Oliveira, Heitor Murilo Gomes, André Grégio (AAML) — 30 Oct 2020
• Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations — Amit Daniely, Hadas Shacham (MLT) — 28 Oct 2020
• FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries — Seng Pei Liew, Tsubasa Takahashi (MIACV · FedML) — 27 Oct 2020
• Attack Agnostic Adversarial Defense via Visual Imperceptible Bound — S. Chhabra, Akshay Agarwal, Richa Singh, Mayank Vatsa (AAML) — 25 Oct 2020
• Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks — Huimin Zeng, Chen Zhu, Tom Goldstein, Furong Huang (AAML) — 24 Oct 2020
• ATRO: Adversarial Training with a Rejection Option — Masahiro Kato, Zhenghang Cui, Yoshihiro Fukuhara (AAML) — 24 Oct 2020
• Stop Bugging Me! Evading Modern-Day Wiretapping Using Adversarial Perturbations — Yael Mathov, Tal Senior, A. Shabtai, Yuval Elovici — 24 Oct 2020
• Towards Robust Neural Networks via Orthogonal Diversity — Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang (AAML) — 23 Oct 2020
• Adversarial Robustness of Supervised Sparse Coding — Jeremias Sulam, Ramchandran Muthumukar, R. Arora (AAML) — 22 Oct 2020
• Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming — Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, J. Uesato, ..., Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli (AAML) — 22 Oct 2020
• An Efficient Adversarial Attack for Tree Ensembles — Chong Zhang, Huan Zhang, Cho-Jui Hsieh (AAML) — 22 Oct 2020
• Class-Conditional Defense GAN Against End-to-End Speech Attacks — Mohammad Esmaeilpour, P. Cardinal, Alessandro Lameiras Koerich (AAML) — 22 Oct 2020
• Learning Black-Box Attackers with Transferable Priors and Query Feedback — Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao (AAML) — 21 Oct 2020
• Towards Understanding the Dynamics of the First-Order Adversaries — Zhun Deng, Hangfeng He, Jiaoyang Huang, Weijie J. Su (AAML) — 20 Oct 2020
• Ulixes: Facial Recognition Privacy with Adversarial Machine Learning — Thomas Cilloni, Wei Wang, Charles Walter, Charles Fleming (PICV · AAML) — 20 Oct 2020
• Robust Neural Networks inspired by Strong Stability Preserving Runge-Kutta methods — Byungjoo Kim, Bryce Chudomelka, Jinyoung Park, Jaewoo Kang, Youngjoon Hong, Hyunwoo J. Kim (AAML) — 20 Oct 2020
• RobustBench: a standardized adversarial robustness benchmark — Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein (VLM) — 19 Oct 2020
• Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness — Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard (AAML) — 19 Oct 2020
• Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders — Elior Nehemya, Yael Mathov, A. Shabtai, Yuval Elovici (AIFin · AAML) — 19 Oct 2020
• FADER: Fast Adversarial Example Rejection — Francesco Crecchi, Marco Melis, Angelo Sotgiu, D. Bacciu, Battista Biggio (AAML) — 18 Oct 2020
• Weight-Covariance Alignment for Adversarially Robust Neural Networks — Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy M. Hospedales (OOD · AAML) — 17 Oct 2020
• A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models — Ferhat Ozgur Catak, Samed Sivaslioglu, Kevser Sahinbas (AAML) — 17 Oct 2020
• A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning — Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin (GAN · AAML) — 15 Oct 2020
• Adversarial Images through Stega Glasses — Benoît Bonnet, Teddy Furon, Patrick Bas (GAN · AAML) — 15 Oct 2020
• Linking average- and worst-case perturbation robustness via class selectivity and dimensionality — Matthew L. Leavitt, Ari S. Morcos (AAML) — 14 Oct 2020
• Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks — He Zhao, Thanh-Tuan Nguyen, Trung Le, Paul Montague, O. Vel, Tamas Abraham, Dinh Q. Phung (AAML) — 13 Oct 2020
• Improve Adversarial Robustness via Weight Penalization on Classification Layer — Cong Xu, Dan Li, Min Yang (AAML) — 08 Oct 2020
• Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples — Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli (AAML) — 07 Oct 2020
• BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models — A. Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang (AAML · SILM · AI4CE) — 06 Oct 2020
• Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder — Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang (SILM) — 06 Oct 2020
• Constraining Logits by Bounded Function for Adversarial Robustness — Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida (AAML) — 06 Oct 2020
• A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference — Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras (AAML) — 06 Oct 2020
• Understanding Catastrophic Overfitting in Single-step Adversarial Training — Hoki Kim, Woojin Lee, Jaewook Lee (AAML) — 05 Oct 2020