arXiv:2404.08285 (v2, latest)
A Survey of Neural Network Robustness Assessment in Image Recognition
12 April 2024
Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu
Tags: AAML
Papers citing "A Survey of Neural Network Robustness Assessment in Image Recognition"
(20 of 70 citing papers shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Output Reachable Set Estimation and Verification for Multi-Layer Neural Networks | Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson | — | 294 | 09 Aug 2017 |
| An approach to reachability analysis for feed-forward ReLU neural networks | A. Lomuscio, Lalit Maganti | — | 359 | 22 Jun 2017 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 12,117 | 19 Jun 2017 |
| Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks | Shiyu Liang, Yixuan Li, R. Srikant | UQCV, OODD | 2,080 | 08 Jun 2017 |
| MagNet: a Two-Pronged Defense against Adversarial Examples | Dongyu Meng, Hao Chen | AAML | 1,208 | 25 May 2017 |
| Ensemble Adversarial Training: Attacks and Defenses | Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel | AAML | 2,728 | 19 May 2017 |
| DeepXplore: Automated Whitebox Testing of Deep Learning Systems | Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana | AAML | 1,371 | 18 May 2017 |
| Maximum Resilience of Artificial Neural Networks | Chih-Hong Cheng, Georg Nührenberg, Harald Ruess | AAML | 284 | 28 Apr 2017 |
| Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks | Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer | AAML | 1,873 | 03 Feb 2017 |
| Safety Verification of Deep Neural Networks | Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu | AAML | 944 | 21 Oct 2016 |
| A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | Dan Hendrycks, Kevin Gimpel | UQCV | 3,468 | 07 Oct 2016 |
| Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 5,909 | 08 Jul 2016 |
| Measuring Neural Net Robustness with Constraints | Osbert Bastani, Yani Andrew Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, A. Nori, A. Criminisi | AAML | 424 | 24 May 2016 |
| DeepFool: a simple and accurate method to fool deep neural networks | Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard | AAML | 4,903 | 14 Nov 2015 |
| Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks | Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami | AAML | 3,077 | 14 Nov 2015 |
| Distilling the Knowledge in a Neural Network | Geoffrey E. Hinton, Oriol Vinyals, J. Dean | FedML | 19,723 | 09 Mar 2015 |
| Analysis of classifiers' robustness to adversarial perturbations | Alhussein Fawzi, Omar Fawzi, P. Frossard | AAML | 361 | 09 Feb 2015 |
| Explaining and Harnessing Adversarial Examples | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | AAML, GAN | 19,107 | 20 Dec 2014 |
| On the Number of Linear Regions of Deep Neural Networks | Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio | — | 1,254 | 08 Feb 2014 |
| Intriguing properties of neural networks | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus | AAML | 14,961 | 21 Dec 2013 |