Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

31 January 2018
Tsui-Wei Weng
Huan Zhang
Pin-Yu Chen
Jinfeng Yi
D. Su
Yupeng Gao
Cho-Jui Hsieh
Luca Daniel
    AAML

Papers citing "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach"

46 of 96 citing papers shown
Training a Resilient Q-Network against Observational Interference
Chao-Han Huck Yang
I-Te Danny Hung
Ouyang Yi
Pin-Yu Chen
OOD
26
14
0
18 Feb 2021
Increasing the Confidence of Deep Neural Networks by Coverage Analysis
Giulio Rossolini
Alessandro Biondi
Giorgio Buttazzo
AAML
26
13
0
28 Jan 2021
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries
Jinyin Chen
Zhen Wang
Haibin Zheng
Jun Xiao
Zhaoyan Ming
AAML
19
5
0
18 Dec 2020
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
Tao Bai
Jinqi Luo
Jun Zhao
AAML
49
8
0
03 Nov 2020
Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
Anna-Kathrin Kopetzki
Bertrand Charpentier
Daniel Zügner
Sandhya Giri
Stephan Günnemann
23
45
0
28 Oct 2020
Planning with Learned Dynamics: Probabilistic Guarantees on Safety and Reachability via Lipschitz Constants
Craig Knuth
Glen Chou
N. Ozay
Dmitry Berenson
24
33
0
18 Oct 2020
Adversarial Boot Camp: label free certified robustness in one epoch
Ryan Campbell
Chris Finlay
Adam M. Oberman
AAML
25
0
0
05 Oct 2020
Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization
P. Gyawali
S. Ghimire
Linwei Wang
AAML
31
7
0
23 Sep 2020
Adversarially Robust Neural Architectures
Minjing Dong
Yanxi Li
Yunhe Wang
Chang Xu
AAML
OOD
42
48
0
02 Sep 2020
Sampling-based Reachability Analysis: A Random Set Theory Approach with Adversarial Sampling
T. Lew
Marco Pavone
AAML
30
53
0
24 Aug 2020
Optimizing Information Loss Towards Robust Neural Networks
Philip Sperl
Konstantin Böttinger
AAML
18
3
0
07 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
27
73
0
07 Aug 2020
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers
Yuzhen Ding
Nupur Thakur
Baoxin Li
AAML
24
3
0
20 Jul 2020
Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren
Mingjie Li
Zexu Liu
Quanshi Zhang
CoGe
19
18
0
29 Jun 2020
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
Jeremiah Zhe Liu
Zi Lin
Shreyas Padhy
Dustin Tran
Tania Bedrax-Weiss
Balaji Lakshminarayanan
UQCV
BDL
37
437
0
17 Jun 2020
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Jingfeng Zhang
Xilie Xu
Bo Han
Gang Niu
Li-zhen Cui
Masashi Sugiyama
Mohan S. Kankanhalli
AAML
33
397
0
26 Feb 2020
Semialgebraic Optimization for Lipschitz Constants of ReLU Networks
Tong Chen
J. Lasserre
Victor Magron
Edouard Pauwels
36
3
0
10 Feb 2020
Softmax-based Classification is k-means Clustering: Formal Proof, Consequences for Adversarial Attacks, and Improvement through Centroid Based Tailoring
Sibylle Hess
W. Duivesteijn
Decebal Constantin Mocanu
20
12
0
07 Jan 2020
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks
Yizhen Dong
Peixin Zhang
Jingyi Wang
Shuang Liu
Jun Sun
Jianye Hao
Xinyu Wang
Li Wang
J. Dong
Ting Dai
OOD
AAML
21
32
0
14 Nov 2019
Robustness Guarantees for Deep Neural Networks on Videos
Min Wu
Marta Z. Kwiatkowska
AAML
19
22
0
28 Jun 2019
Lower Bounds for Adversarially Robust PAC Learning
Dimitrios I. Diochnos
Saeed Mahloujifar
Mohammad Mahmoody
AAML
27
26
0
13 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang
Xuanqing Liu
Jinfeng Yi
Zhi-Hua Zhou
Cho-Jui Hsieh
AAML
23
22
0
10 Jun 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
16
77
0
27 May 2019
POPQORN: Quantifying Robustness of Recurrent Neural Networks
Ching-Yun Ko
Zhaoyang Lyu
Tsui-Wei Weng
Luca Daniel
Ngai Wong
Dahua Lin
AAML
17
75
0
17 May 2019
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Ernest K. Ryu
Jialin Liu
Sicheng Wang
Xiaohan Chen
Zhangyang Wang
W. Yin
AI4CE
22
348
0
14 May 2019
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks
Jun-Ho Choi
Huan Zhang
Jun-Hyuk Kim
Cho-Jui Hsieh
Jong-Seok Lee
AAML
SupR
24
69
0
12 Apr 2019
Algorithms for Verifying Deep Neural Networks
Changliu Liu
Tomer Arnon
Christopher Lazarus
Christopher A. Strong
Clark W. Barrett
Mykel J. Kochenderfer
AAML
36
392
0
15 Mar 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen
Elan Rosenfeld
J. Zico Kolter
AAML
17
1,992
0
08 Feb 2019
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
Tsui-Wei Weng
Pin-Yu Chen
Lam M. Nguyen
M. Squillante
Ivan Oseledets
Luca Daniel
AAML
21
30
0
18 Dec 2018
Wireless Network Intelligence at the Edge
Jihong Park
S. Samarakoon
M. Bennis
Mérouane Debbah
21
518
0
07 Dec 2018
MixTrain: Scalable Training of Verifiably Robust Neural Networks
Yue Zhang
Yizheng Chen
Ahmed Abdou
Mohsen Guizani
AAML
21
23
0
06 Nov 2018
Efficient Neural Network Robustness Certification with General Activation Functions
Huan Zhang
Tsui-Wei Weng
Pin-Yu Chen
Cho-Jui Hsieh
Luca Daniel
AAML
11
743
0
02 Nov 2018
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
Tsui-Wei Weng
Huan Zhang
Pin-Yu Chen
A. Lozano
Cho-Jui Hsieh
Luca Daniel
28
10
0
19 Oct 2018
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Chaowei Xiao
Ruizhi Deng
Bo-wen Li
Feng Yu
M. Liu
D. Song
AAML
16
99
0
11 Oct 2018
Improved robustness to adversarial examples using Lipschitz regularization of the loss
Chris Finlay
Adam M. Oberman
B. Abbasi
24
34
0
01 Oct 2018
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D. Su
Huan Zhang
Hongge Chen
Jinfeng Yi
Pin-Yu Chen
Yupeng Gao
VLM
40
388
0
05 Aug 2018
On the Robustness of Interpretability Methods
David Alvarez-Melis
Tommi Jaakkola
30
522
0
21 Jun 2018
Lipschitz regularity of deep neural networks: analysis and efficient estimation
Kevin Scaman
Aladin Virmaux
22
516
0
28 May 2018
Towards Fast Computation of Certified Robustness for ReLU Networks
Tsui-Wei Weng
Huan Zhang
Hongge Chen
Zhao Song
Cho-Jui Hsieh
Duane S. Boning
Inderjit S. Dhillon
Luca Daniel
AAML
33
686
0
25 Apr 2018
Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size
Ian Goodfellow
AAML
11
36
0
21 Apr 2018
Generative Adversarial Perturbations
Omid Poursaeed
Isay Katsman
Bicheng Gao
Serge J. Belongie
AAML
GAN
WIGM
31
351
0
06 Dec 2017
Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu
Minhao Cheng
Huan Zhang
Cho-Jui Hsieh
FedML
AAML
43
418
0
02 Dec 2017
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz
Clark W. Barrett
D. Dill
Kyle D. Julian
Mykel Kochenderfer
AAML
249
1,838
0
03 Feb 2017
Adversarial Machine Learning at Scale
Alexey Kurakin
Ian Goodfellow
Samy Bengio
AAML
296
3,112
0
04 Nov 2016
Safety Verification of Deep Neural Networks
Xiaowei Huang
Marta Kwiatkowska
Sen Wang
Min Wu
AAML
180
932
0
21 Oct 2016
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
293
5,842
0
08 Jul 2016