Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong

15 June 2017
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, D. Song
AAML

Papers citing "Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong"

50 / 127 papers shown

Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Michael Everett, Bjorn Lutjens, Jonathan P. How
AAML
11 Apr 2020

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes
Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
AAML
01 Apr 2020

Denoised Smoothing: A Provable Defense for Pretrained Classifiers
Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
04 Mar 2020

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy
Aditya Saligrama, Guillaume Leclerc
AAML
26 Feb 2020

Adversarial Ranking Attack and Defense
Mo Zhou, Zhenxing Niu, Le Wang, Qilin Zhang, G. Hua
26 Feb 2020

Towards Backdoor Attacks and Defense in Robust Machine Learning Models
E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
AAML
25 Feb 2020

Robustness from Simple Classifiers
Sharon Qian, Dimitris Kalimeris, Gal Kaplun, Yaron Singer
AAML
21 Feb 2020

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
OOD
11 Feb 2020

An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models
Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Miryung Kim
AAML
06 Feb 2020

Secure and Robust Machine Learning for Healthcare: A Survey
A. Qayyum, Junaid Qadir, Muhammad Bilal, Ala I. Al-Fuqaha
AAML, OOD
21 Jan 2020

ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense
Meng, Jianhai Su, Jason M. O'Kane, Pooyan Jamshidi
AAML
02 Jan 2020

Exploiting the Sensitivity of $L_2$ Adversarial Examples to Erase-and-Restore
F. Zuo, Qiang Zeng
AAML
01 Jan 2020

Exploring the Back Alleys: Analysing The Robustness of Alternative Neural Network Architectures against Adversarial Attacks
Y. Tan, Yuval Elovici, Alexander Binder
AAML
08 Dec 2019

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Zhengyu Zhao, Zhuoran Liu, Martha Larson
AAML
06 Nov 2019

Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh
31 Oct 2019

Certified Adversarial Robustness for Deep Reinforcement Learning
Björn Lütjens, Michael Everett, Jonathan P. How
AAML
28 Oct 2019

A Useful Taxonomy for Adversarial Robustness of Neural Networks
L. Smith
AAML
23 Oct 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik
AAML
20 Sep 2019

Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, Feng Lu
MedIm, AAML
24 Jul 2019

Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo-wen Li, Duane S. Boning, Cho-Jui Hsieh
AAML
14 Jun 2019

Robust Attacks against Multiple Classifiers
Juan C. Perdomo, Yaron Singer
AAML
06 Jun 2019

Enhancing Gradient-based Attacks with Symbolic Intervals
Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
AAML
05 Jun 2019

Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
Pengcheng Li, Jinfeng Yi, Bowen Zhou, Lijun Zhang
AAML
28 May 2019

Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks
Yuanshun Yao, Huiying Li, Haitao Zheng, Ben Y. Zhao
AAML
24 May 2019

Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray
20 May 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal
OODD
05 May 2019

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong
AAML
01 May 2019

Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao
18 Apr 2019

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence
Yi Han, David Hubczenko, Paul Montague, O. Vel, Tamas Abraham, Benjamin I. P. Rubinstein, C. Leckie, T. Alpcan, S. Erfani
AAML
25 Feb 2019

On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, A. Madry, Alexey Kurakin
ELM, AAML
18 Feb 2019

AuxBlocks: Defense Adversarial Example via Auxiliary Blocks
Yueyao Yu, Pengfei Yu, Wenye Li
AAML
18 Feb 2019

A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu
AAML
26 Jan 2019

Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan
24 Jan 2019

Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
K. Rajaratnam, Jugal Kalita
AAML
25 Dec 2018

Exploiting the Inherent Limitation of $L_0$ Adversarial Examples
F. Zuo, Bokai Yang, Xiaopeng Li, Lannan Luo, Qiang Zeng
AAML
23 Dec 2018

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
AAML, AI4CE
03 Dec 2018

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh
AAML
08 Nov 2018

MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang, Yizheng Chen, Ahmed Abdou, Mohsen Guizani
AAML
06 Nov 2018

Security Matters: A Survey on Adversarial Machine Learning
Guofu Li, Pengjia Zhu, Jin Li, Zhemin Yang, Ning Cao, Zhiyi Chen
AAML
16 Oct 2018

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Chaowei Xiao, Ruizhi Deng, Bo-wen Li, Feng Yu, M. Liu, D. Song
AAML
11 Oct 2018

Secure Deep Learning Engineering: A Software Quality Assurance Perspective
Lei Ma, Felix Juefei Xu, Minhui Xue, Q. Hu, Sen Chen, Bo-wen Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See
AAML
10 Oct 2018

Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only
T. Zheng, Changyou Chen, K. Ren
AAML
10 Oct 2018

WAIC, but Why? Generative Ensembles for Robust Anomaly Detection
Hyun-Jae Choi, Eric Jang, Alexander A. Alemi
OODD
02 Oct 2018

Adversarial Examples - A Complete Characterisation of the Phenomenon
A. Serban, E. Poll, Joost Visser
SILM, AAML
02 Oct 2018

Characterizing Audio Adversarial Examples Using Temporal Dependency
Zhuolin Yang, Bo-wen Li, Pin-Yu Chen, D. Song
AAML
28 Sep 2018

Bridging machine learning and cryptography in defence against adversarial attacks
O. Taran, Shideh Rezaeifar, Slava Voloshynovskiy
AAML
05 Sep 2018

Reinforcement Learning for Autonomous Defence in Software-Defined Networking
Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, T. Alpcan, O. Vel, S. Erfani, David Hubczenko, C. Leckie, Paul Montague
AAML
17 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV
31 Jul 2018

Evaluating and Understanding the Robustness of Adversarial Logit Pairing
Logan Engstrom, Andrew Ilyas, Anish Athalye
AAML
26 Jul 2018

Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning
J. Dhaliwal, S. Shintre
AAML
27 Jun 2018