ResearchTrend.AI
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML
20 May 2017

Papers citing "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods"

50 / 349 papers shown
Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations
R. Mekala, Gudjon Magnusson, Adam A. Porter, Mikael Lindvall, Madeline Diep
AAML
10 Jul 2019

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, S. Sabour, Colin Raffel, G. Cottrell, Geoffrey E. Hinton
GAN, AAML
05 Jul 2019

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein
AAML
03 Jul 2019

Adversarial Examples to Fool Iris Recognition Systems
Sobhan Soleymani, Ali Dabouei, J. Dawson, Nasser M. Nasrabadi
GAN, AAML
21 Jun 2019

Machine Learning Testing: Survey, Landscapes and Horizons
Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu
VLM, AILaw
19 Jun 2019

Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
SILM
15 Jun 2019

A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay, Rehana Mahfuz, Aly El Gamal
AAML
13 Jun 2019

Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML
09 Jun 2019

ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan
AAML
08 Jun 2019

Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
AAML
05 Jun 2019

Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML
31 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
AAML
27 May 2019

Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
AAML
19 May 2019

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 May 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
08 May 2019

Test Selection for Deep Learning Systems
Wei Ma, Mike Papadakis, Anestis Tsakmalis, Maxime Cordy, Yves Le Traon
OOD
30 Apr 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis
AAML
12 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu
SILM, AAML
08 Apr 2019

Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin
AAML, FAtt
03 Apr 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein
AAML
27 Mar 2019

Detecting Overfitting via Adversarial Examples
Roman Werpachowski, András Gyorgy, Csaba Szepesvári
TDI
06 Mar 2019

Attacking Graph-based Classification via Manipulating the Graph Structure
Binghui Wang, Neil Zhenqiang Gong
AAML
01 Mar 2019

Quantifying Perceptual Distortion of Adversarial Examples
Matt Jordan, N. Manoj, Surbhi Goel, A. Dimakis
21 Feb 2019

The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
Kevin Roth, Yannic Kilcher, Thomas Hofmann
AAML
13 Feb 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML
08 Feb 2019

A New Family of Neural Networks Provably Resistant to Adversarial Attacks
Rakshit Agrawal, Luca de Alfaro, D. Helmbold
AAML, OOD
01 Feb 2019

Adversarial Examples Are a Natural Consequence of Test Error in Noise
Nic Ford, Justin Gilmer, Nicholas Carlini, E. D. Cubuk
AAML
29 Jan 2019

Using Pre-Training Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Kimin Lee, Mantas Mazeika
NoLa
28 Jan 2019

Improving Adversarial Robustness via Promoting Ensemble Diversity
Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu
AAML
25 Jan 2019

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers
Daniel Liu, Ronald Yu, Hao Su
3DPC
10 Jan 2019

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study
Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu
AAML
04 Jan 2019

A Multiversion Programming Inspired Approach to Detecting Audio Adversarial Examples
Qiang Zeng, Jianhai Su, Chenglong Fu, Golam Kayas, Lannan Luo
AAML
26 Dec 2018

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
OODD
13 Dec 2018

Learning Transferable Adversarial Examples via Ghost Networks
Yingwei Li, S. Bai, Yuyin Zhou, Cihang Xie, Zhishuai Zhang, Alan Yuille
AAML
09 Dec 2018

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li
AAML
05 Dec 2018

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
AAML, AI4CE
03 Dec 2018

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML
02 Dec 2018

A randomized gradient-free attack on ReLU networks
Francesco Croce, Matthias Hein
AAML
28 Nov 2018

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh
AAML
08 Nov 2018

MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani
AAML
06 Nov 2018

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, A. Lozano, Cho-Jui Hsieh, Luca Daniel
19 Oct 2018

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Chaowei Xiao, Ruizhi Deng, Bo-wen Li, Feng Yu, M. Liu, D. Song
AAML
11 Oct 2018

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford
FAtt
09 Oct 2018

Detecting DGA domains with recurrent neural networks and side information
Ryan R. Curtin, Andrew B. Gardner, Slawomir Grzonkowski, A. Kleymenov, Alejandro Mosquera
AAML
04 Oct 2018

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh
AAML, OOD
01 Oct 2018

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu
AAML
30 Sep 2018

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces
Chia-Yi Hsu, Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
AAML
24 Sep 2018

Query-Efficient Black-Box Attack by Active Learning
Pengcheng Li, Jinfeng Yi, Lijun Zhang
AAML, MLAU
13 Sep 2018

Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, A. Madry
AAML, OOD
09 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML
08 Sep 2018