Towards Evaluating the Robustness of Neural Networks
arXiv:1608.04644
16 August 2016
Nicholas Carlini
D. Wagner
OOD, AAML
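For context on the listing below: this paper introduces the Carlini-Wagner (C&W) attacks, which cast the search for an adversarial example as an optimization problem. A minimal sketch of the targeted $L_2$ objective, in the paper's notation ($x$ the input, $\delta$ the perturbation, $Z(\cdot)$ the logits, $t$ the target class, $\kappa$ the confidence margin, $c$ a trade-off constant chosen by binary search):

\min_{\delta} \;\; \lVert \delta \rVert_2^2 + c \cdot f(x + \delta)
\quad \text{subject to} \quad x + \delta \in [0, 1]^n,
\qquad
f(x') = \max\!\Bigl( \max_{i \neq t} Z(x')_i - Z(x')_t,\; -\kappa \Bigr)

The box constraint is handled in the paper by the change of variables $\delta = \tfrac{1}{2}(\tanh(w) + 1) - x$, so the minimization over $w$ is unconstrained. Many of the citing papers below propose or evaluate defenses against exactly this attack.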

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 4,015 papers shown
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
Yang Song
Taesup Kim
Sebastian Nowozin
Stefano Ermon
Nate Kushman
AAML
147
791
0
30 Oct 2017
Attacking the Madry Defense Model with $L_1$-based Adversarial Examples
Yash Sharma
Pin-Yu Chen
126
118
0
30 Oct 2017
Certifying Some Distributional Robustness with Principled Adversarial Training
Aman Sinha
Hongseok Namkoong
Riccardo Volpi
John C. Duchi
OOD
149
866
0
29 Oct 2017
One pixel attack for fooling deep neural networks
Jiawei Su
Danilo Vasconcellos Vargas
Kouichi Sakurai
AAML
220
2,336
0
24 Oct 2017
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Matthew Wicker
Xiaowei Huang
Marta Kwiatkowska
AAML
83
236
0
21 Oct 2017
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
Yen-Chen Lin
Ming-Yuan Liu
Min Sun
Jia-Bin Huang
AAML
104
49
0
02 Oct 2017
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks
D. Gopinath
Guy Katz
C. Păsăreanu
Clark W. Barrett
AAML
144
87
0
02 Oct 2017
Provably Minimally-Distorted Adversarial Examples
Nicholas Carlini
Guy Katz
Clark W. Barrett
D. Dill
AAML
105
89
0
29 Sep 2017
Fooling Vision and Language Models Despite Localization and Attention Mechanism
Xiaojun Xu
Xinyun Chen
Chang-rui Liu
Anna Rohrbach
Trevor Darrell
Basel Alomair
AAML
106
41
0
25 Sep 2017
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
85
212
0
17 Sep 2017
Robustness Analysis of Visual QA Models by Basic Questions
Jia-Hong Huang
Cuong Duc Dao
Modar Alfadly
C. Huck Yang
Guohao Li
OOD
65
24
0
14 Sep 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen
Yash Sharma
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
AAML
86
641
0
13 Sep 2017
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
Thilo Strauss
Markus Hanselmann
Andrej Junginger
Holger Ulmer
AAML
93
137
0
11 Sep 2017
Towards Proving the Adversarial Robustness of Deep Neural Networks
Guy Katz
Clark W. Barrett
D. Dill
Kyle D. Julian
Mykel J. Kochenderfer
AAML, OOD
112
118
0
08 Sep 2017
DeepFense: Online Accelerated Defense Against Adversarial Deep Learning
B. Rouhani
Mohammad Samragh
Mojan Javaheripi
T. Javidi
F. Koushanfar
AAML
53
15
0
08 Sep 2017
Practical Attacks Against Graph-based Clustering
Yizheng Chen
Yacin Nadji
Athanasios Kountouras
Fabian Monrose
R. Perdisci
M. Antonakakis
N. Vasiloglou
AAML
63
87
0
29 Aug 2017
DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars
Yuchi Tian
Kexin Pei
Suman Jana
Baishakhi Ray
AAML
99
1,365
0
28 Aug 2017
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features
Liang Tong
Yue Liu
Chen Hajaj
Chaowei Xiao
Ning Zhang
Yevgeniy Vorobeychik
AAML, OOD
52
88
0
28 Aug 2017
Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge
Xinyang Zhang
Yujie Ji
Ting Wang
AAML
39
2
0
25 Aug 2017
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen
Huan Zhang
Yash Sharma
Jinfeng Yi
Cho-Jui Hsieh
AAML
127
1,896
0
14 Aug 2017
Cascade Adversarial Machine Learning Regularized with a Unified Embedding
Taesik Na
J. Ko
Saibal Mukhopadhyay
AAML, GAN
95
102
0
08 Aug 2017
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
Andrew P. Norton
Yanjun Qi
AAML
75
47
0
01 Aug 2017
Robust Physical-World Attacks on Deep Learning Models
Kevin Eykholt
Ivan Evtimov
Earlence Fernandes
Yue Liu
Amir Rahmati
Chaowei Xiao
Atul Prakash
Tadayoshi Kohno
Basel Alomair
AAML
145
595
0
27 Jul 2017
Efficient Defenses Against Adversarial Attacks
Valentina Zantedeschi
Maria-Irina Nicolae
Ambrish Rawat
AAML
76
297
0
21 Jul 2017
APE-GAN: Adversarial Perturbation Elimination with GAN
Shiwei Shen
Guoqing Jin
Feng Dai
Yongdong Zhang
GAN
122
221
0
18 Jul 2017
A Formal Framework to Characterize Interpretability of Procedures
Amit Dhurandhar
Vijay Iyengar
Ronny Luss
Karthikeyan Shanmugam
47
19
0
12 Jul 2017
Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
John Bradshaw
A. G. Matthews
Zoubin Ghahramani
BDL, AAML
123
172
0
08 Jul 2017
Efficient Data Representation by Selecting Prototypes with Importance Weights
Karthik S. Gurumoorthy
Amit Dhurandhar
Guillermo Cecchi
Charu Aggarwal
122
22
0
05 Jul 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
508
12,186
0
19 Jun 2017
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He
James Wei
Xinyun Chen
Nicholas Carlini
Basel Alomair
AAML
122
241
0
15 Jun 2017
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Yizhen Wang
S. Jha
Kamalika Chaudhuri
AAML
251
155
0
13 Jun 2017
TIP: Typifying the Interpretability of Procedures
Amit Dhurandhar
Vijay Iyengar
Ronny Luss
Karthikeyan Shanmugam
95
36
0
09 Jun 2017
Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation
Andrew P. Norton
Yanjun Qi
AAML
26
0
0
06 Jun 2017
Towards Robust Detection of Adversarial Examples
Tianyu Pang
Chao Du
Yinpeng Dong
Jun Zhu
AAML
87
18
0
02 Jun 2017
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
Weilin Xu
David Evans
Yanjun Qi
AAML
68
42
0
30 May 2017
MagNet: a Two-Pronged Defense against Adversarial Examples
Dongyu Meng
Hao Chen
AAML
56
1,211
0
25 May 2017
Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction
Bin Liang
Hongcheng Li
Miaoqiang Su
Xirong Li
Wenchang Shi
Wenyuan Xu
AAML
139
220
0
23 May 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini
D. Wagner
AAML
142
1,870
0
20 May 2017
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Sailik Sengupta
Tathagata Chakraborti
S. Kambhampati
AAML
139
63
0
19 May 2017
Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr
Alexey Kurakin
Nicolas Papernot
Ian Goodfellow
Dan Boneh
Patrick McDaniel
AAML
217
2,739
0
19 May 2017
DeepXplore: Automated Whitebox Testing of Deep Learning Systems
Kexin Pei
Yinzhi Cao
Junfeng Yang
Suman Jana
AAML
142
1,376
0
18 May 2017
Extending Defensive Distillation
Nicolas Papernot
Patrick McDaniel
AAML
91
119
0
15 May 2017
DeepCorrect: Correcting DNN models against Image Distortions
Tejas S. Borkar
Lina Karam
133
93
0
05 May 2017
Deep Text Classification Can be Fooled
Bin Liang
Hongcheng Li
Miaoqiang Su
Pan Bian
Xirong Li
Wenchang Shi
AAML
85
427
0
26 Apr 2017
Universal Adversarial Perturbations Against Semantic Image Segmentation
J. H. Metzen
Mummadi Chaithanya Kumar
Thomas Brox
Volker Fischer
AAML
181
289
0
19 Apr 2017
Google's Cloud Vision API Is Not Robust To Noise
Hossein Hosseini
Baicen Xiao
Radha Poovendran
AAML
77
124
0
16 Apr 2017
Enhancing Robustness of Machine Learning Systems via Data Transformations
A. Bhagoji
Daniel Cullina
Chawin Sitawarin
Prateek Mittal
AAML
134
231
0
09 Apr 2017
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Weilin Xu
David Evans
Yanjun Qi
AAML
106
1,288
0
04 Apr 2017
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
Jiajun Lu
Theerasit Issaranon
David A. Forsyth
GAN
120
381
0
01 Apr 2017
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
S. Baluja
Ian S. Fischer
GAN
87
286
0
28 Mar 2017