Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
arXiv:1812.05720 · 13 December 2018 · OODD
Papers citing "Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem" (49 of 349 papers shown):
Adaptive Label Smoothing
  Ujwal Krothapalli, A. Lynn Abbott · 14 Sep 2020
A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples
  Julia Lust, A. P. Condurache · UQCV, AAML, AI4CE · 21 Aug 2020
A General Framework For Detecting Anomalous Inputs to DNN Classifiers
  Jayaram Raghuram, Varun Chandrasekaran, S. Jha, Suman Banerjee · AAML · 29 Jul 2020
Certifiably Adversarially Robust Detection of Out-of-Distribution Data
  Julian Bitterwolf, Alexander Meinke, Matthias Hein · 16 Jul 2020
Nested Learning For Multi-Granular Tasks
  Raphaël Achddou, J. Matias Di Martino, Guillermo Sapiro · 13 Jul 2020
Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks
  Shreyas Padhy, Zachary Nado, Jie Jessie Ren, J. Liu, Jasper Snoek, Balaji Lakshminarayanan · UQCV · 10 Jul 2020
Soft Labeling Affects Out-of-Distribution Detection of Deep Neural Networks
  Doyup Lee, Yeongjae Cheon · 07 Jul 2020
ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
  Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha · OODD · 26 Jun 2020
Hyperparameter Ensembles for Robustness and Uncertainty Quantification
  F. Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton · UQCV · 24 Jun 2020
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
  Jeremiah Zhe Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan · UQCV, BDL · 17 Jun 2020
Revisiting Explicit Regularization in Neural Networks for Well-Calibrated Predictive Uncertainty
  Taejong Joo, U. Chung · BDL, UQCV · 11 Jun 2020
A t-distribution based operator for enhancing out of distribution robustness of neural network classifiers
  Niccolò Antonello, Philip N. Garner · 09 Jun 2020
Entropic Out-of-Distribution Detection: Seamless Detection of Unknown Examples
  David Macêdo, T. I. Ren, Cleber Zanchettin, Adriano Oliveira, Teresa B Ludermir · OODD · 07 Jun 2020
Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers
  S. Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli · 06 Jun 2020
ReLU Code Space: A Basis for Rating Network Quality Besides Accuracy
  Natalia Shepeleva, Werner Zellinger, Michal Lewandowski, Bernhard A. Moser · 20 May 2020
A Review of Computer Vision Methods in Network Security
  Jiawei Zhao, Rahat Masood, Suranga Seneviratne · AAML · 07 May 2020
Shortcut Learning in Deep Neural Networks
  Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann · 16 Apr 2020
Robust Out-of-distribution Detection for Neural Networks
  Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha · OODD · 21 Mar 2020
Adversarial Robustness on In- and Out-Distribution Improves Explainability
  Maximilian Augustin, Alexander Meinke, Matthias Hein · OOD · 20 Mar 2020
Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation
  Yingda Xia, Yi Zhang, Fengze Liu, Wei Shen, Alan Yuille · UQCV · 18 Mar 2020
Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning
  Jize Zhang, B. Kailkhura, T. Y. Han · UQCV · 16 Mar 2020
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
  Siqi Liu, A. Setio, Florin-Cristian Ghesu, Eli Gibson, Sasa Grbic, Bogdan Georgescu, Dorin Comaniciu · AAML, OOD · 08 Mar 2020
Dropout Strikes Back: Improved Uncertainty Estimation via Diversity Sampling
  Kirill Fedyanin, Evgenii Tsymbalov, Maxim Panov · UQCV · 06 Mar 2020
Fast Predictive Uncertainty for Classification with Bayesian Deep Networks
  Marius Hobbhahn, Agustinus Kristiadi, Philipp Hennig · BDL, UQCV · 02 Mar 2020
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
  Agustinus Kristiadi, Matthias Hein, Philipp Hennig · BDL, UQCV · 24 Feb 2020
On the Role of Dataset Quality and Heterogeneity in Model Confidence
  Yuan Zhao, Jiasi Chen, Samet Oymak · 23 Feb 2020
On Last-Layer Algorithms for Classification: Decoupling Representation from Uncertainty Estimation
  N. Brosse, C. Riquelme, Alice Martin, Sylvain Gelly, Eric Moulines · BDL, OOD, UQCV · 22 Jan 2020
Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
  Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht · 22 Jan 2020
Safe Robot Navigation via Multi-Modal Anomaly Detection
  Lorenz Wellhausen, René Ranftl, Marco Hutter · 22 Jan 2020
Practical Solutions for Machine Learning Safety in Autonomous Vehicles
  Sina Mohseni, Mandar Pitale, Vasu Singh, Zhangyang Wang · 20 Dec 2019
On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration
  Kanil Patel, William H. Beluch, Dan Zhang, Michael Pfeiffer, Bin Yang · UQCV · 16 Dec 2019
Playing it Safe: Adversarial Robustness with an Abstain Option
  Cassidy Laidlaw, S. Feizi · AAML · 25 Nov 2019
Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output
  Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay, Taylor Denouden, Sachin Vernekar, Buu Phan · OODD · 23 Oct 2019
Toward Metrics for Differentiating Out-of-Distribution Sets
  Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagné, R. Bobba · OODD · 18 Oct 2019
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
  David Stutz, Matthias Hein, Bernt Schiele · AAML · 14 Oct 2019
Out-of-distribution Detection in Classifiers via Generation
  Sachin Vernekar, Ashish Gaurav, Vahdat Abdelzad, Taylor Denouden, Rick Salay, Krzysztof Czarnecki · OODD · 09 Oct 2019
Towards neural networks that provably know when they don't know
  Alexander Meinke, Matthias Hein · OODD · 26 Sep 2019
Model-Based and Data-Driven Strategies in Medical Image Computing
  Daniel Rueckert, Julia A. Schnabel · OOD, MedIm, AI4CE · 23 Sep 2019
Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations
  Vishnu Suresh Lokhande, Songwong Tasneeyapant, Abhay Venkatesh, Sathya Ravi, Vikas Singh · 12 Sep 2019
Entropic Out-of-Distribution Detection
  David Macêdo, T. I. Ren, Cleber Zanchettin, Adriano Oliveira, Teresa B Ludermir · OODD, UQCV · 15 Aug 2019
Interpretable Image Recognition with Hierarchical Prototypes
  Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin · VLM · 25 Jun 2019
Non-Parametric Calibration for Classification
  Jonathan Wenger, Hedvig Kjellström, Rudolph Triebel · UQCV · 12 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
  Maksym Andriushchenko, Matthias Hein · 08 Jun 2019
Analysis of Confident-Classifiers for Out-of-distribution Detection
  Sachin Vernekar, Ashish Gaurav, Taylor Denouden, Buu Phan, Vahdat Abdelzad, Rick Salay, Krzysztof Czarnecki · OODD · 27 Apr 2019
Exploring Uncertainty Measures for Image-Caption Embedding-and-Retrieval Task
  Kenta Hama, Takashi Matsubara, K. Uehara, Jianfei Cai · BDL, UQCV · 09 Apr 2019
A witness function based construction of discriminative models using Hermite polynomials
  H. Mhaskar, A. Cloninger, Xiuyuan Cheng · 10 Jan 2019
Disentangling Adversarial Robustness and Generalization
  David Stutz, Matthias Hein, Bernt Schiele · AAML, OOD · 03 Dec 2018
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
  Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell · UQCV, BDL · 05 Dec 2016
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
  Y. Gal, Zoubin Ghahramani · UQCV, BDL · 06 Jun 2015