Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner · 20 May 2017 · AAML
Papers citing "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods" (50 / 349 papers shown)
"What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
Sahar Abdelnabi
Mario Fritz
AAML
27
7
0
09 Feb 2021
Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review
Dongrui Wu
Jiaxin Xu
Weili Fang
Yi Zhang
Liuqing Yang
Xiaodong Xu
Hanbin Luo
Xiang Yu
AAML
27
25
0
04 Feb 2021
Increasing the Confidence of Deep Neural Networks by Coverage Analysis
Giulio Rossolini
Alessandro Biondi
Giorgio Buttazzo
AAML
26
13
0
28 Jan 2021
Error Diffusion Halftoning Against Adversarial Examples
Shao-Yuan Lo
Vishal M. Patel
DiffM
10
4
0
23 Jan 2021
Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi
Adel Javanmard
Ryan A. Rossi
Anup B. Rao
Tung Mai
AAML
20
17
0
15 Jan 2021
The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
Andreas Bär
Jonas Löhdefink
Nikhil Kapoor
Serin Varghese
Fabian Hüger
Peter Schlicht
Tim Fingscheidt
AAML
108
33
0
11 Jan 2021
Demystifying Deep Neural Networks Through Interpretation: A Survey
Giang Dao
Minwoo Lee
FaML
FAtt
16
1
0
13 Dec 2020
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu
Tianpei Xia
Laurie A. Williams
Tim Menzies
AAML
32
15
0
23 Nov 2020
Adversarial Classification: Necessary conditions and geometric flows
Nicolas García Trillos
Ryan W. Murray
AAML
34
19
0
21 Nov 2020
The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms
Rui Zhao
AAML
34
1
0
02 Nov 2020
GreedyFool: Distortion-Aware Sparse Adversarial Attack · Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, Dong Chen · AAML · 26 Oct 2020
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness · Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 19 Oct 2020
Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples · Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli · AAML · 07 Oct 2020
Query complexity of adversarial attacks · Grzegorz Gluch, R. Urbanke · AAML · 02 Oct 2020
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders · Vito Walter Anelli, Tommaso Di Noia, Daniele Malitesta, Felice Antonio Merra · AAML · 02 Oct 2020
Block-wise Image Transformation with Secret Key for Adversarially Robust Defense · Maungmaung Aprilpyone, Hitoshi Kiya · 02 Oct 2020
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing · Maurice Weber, Nana Liu, Bo-wen Li, Ce Zhang, Zhikuan Zhao · AAML · 21 Sep 2020
Certifying Confidence via Randomized Smoothing · Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein · UQCV · 17 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective · G. R. Machado, Eugênio Silva, R. Goldschmidt · AAML · 08 Sep 2020
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020
Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings · John Mitros, A. Pakrashi, Brian Mac Namee · UQCV · 03 Sep 2020
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards · Masoud Hashemi, Ali Fathi · AAML · 24 Aug 2020
Defending Adversarial Examples via DNN Bottleneck Reinforcement · Wenqing Liu, Miaojing Shi, Teddy Furon, Li Li · AAML · 12 Aug 2020
Optimizing Information Loss Towards Robust Neural Networks · Philip Sperl, Konstantin Böttinger · AAML · 07 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey · A. Serban, E. Poll, Joost Visser · AAML · 07 Aug 2020
Robust Machine Learning via Privacy/Rate-Distortion Theory · Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin · OOD · 22 Jul 2020
Towards Visual Distortion in Black-Box Attacks · Nannan Li, Zhenzhong Chen · 21 Jul 2020
Robust Image Classification Using A Low-Pass Activation Function and DCT Augmentation · Md Tahmid Hossain, S. Teng, Ferdous Sohel, Guojun Lu · 18 Jul 2020
Do Adversarially Robust ImageNet Models Transfer Better? · Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, A. Madry · 16 Jul 2020
Adversarial Attacks against Neural Networks in Audio Domain: Exploiting Principal Components · Ken Alparslan, Yigit Can Alparslan, Matthew Burlick · AAML · 14 Jul 2020
Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples · S. Goldwasser, Adam Tauman Kalai, Y. Kalai, Omar Montasser · AAML · 10 Jul 2020
How benign is benign overfitting? · Amartya Sanyal, P. Dokania, Varun Kanade, Philip Torr · NoLa, AAML · 08 Jul 2020
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment · Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Ines Goicoechea-Telleria, Raul Orduna Urrutia · AAML · 02 Jul 2020
Improving Calibration through the Relationship with Adversarial Robustness · Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi · AAML · 29 Jun 2020
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples · Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen · AAML · 18 Jun 2020
Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity · Nam Ho-Nguyen, Stephen J. Wright · OOD · 28 May 2020
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks · Hokuto Hirano, K. Koga, Kazuhiro Takemoto · AAML · 22 May 2020
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking · Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal · AAML · 17 May 2020
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder · Guanlin Li, Shuya Ding, Jun Luo, Chang-rui Liu · AAML · 06 May 2020
Adversarial Training against Location-Optimized Adversarial Patches · Sukrut Rao, David Stutz, Bernt Schiele · AAML · 05 May 2020
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty · Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Q. Hu, Yang Liu, Jianjun Zhao, Meng Sun · AAML · 24 Apr 2020
Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers · Yusi Lei, Sen Chen, Lingling Fan, Fu Song, Yang Liu · AAML · 15 Apr 2020
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning · Michael Everett, Bjorn Lutjens, Jonathan P. How · AAML · 11 Apr 2020
DaST: Data-free Substitute Training for Adversarial Attacks · Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu · 28 Mar 2020
Systematic Evaluation of Privacy Risks of Machine Learning Models · Liwei Song, Prateek Mittal · MIACV · 24 Mar 2020
Adversarial Robustness on In- and Out-Distribution Improves Explainability · Maximilian Augustin, Alexander Meinke, Matthias Hein · OOD · 20 Mar 2020
Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates · Amin Ghiasi, Ali Shafahi, Tom Goldstein · 19 Mar 2020
Anomalous Example Detection in Deep Learning: A Survey · Saikiran Bulusu, B. Kailkhura, Bo-wen Li, P. Varshney, D. Song · AAML · 16 Mar 2020
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness · Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong · OOD, AAML · 02 Mar 2020
Overfitting in adversarially robust deep learning · Leslie Rice, Eric Wong, Zico Kolter · 26 Feb 2020