The Delta method is a classical procedure for quantifying epistemic uncertainty in statistical models, but its direct application to deep neural networks is prevented by the large number of parameters P. We propose a low-cost variant of the Delta method applicable to L2-regularized deep neural networks, based on the top K eigenpairs of the Fisher information matrix. We address efficient computation of full-rank approximate eigendecompositions in terms of either the exact inverse Hessian, the inverse outer-products of gradients approximation, or the so-called Sandwich estimator. Moreover, we provide a bound on the approximation error for the uncertainty of the predictive class probabilities. We observe that when the smallest eigenvalue of the Fisher information matrix is near the L2-regularization rate, the approximation error is close to zero even when K ≪ P. A demonstration of the methodology is presented using a TensorFlow implementation, and we show that meaningful rankings of images based on predictive uncertainty can be obtained for two LeNet-based neural networks using the MNIST and CIFAR-10 datasets. Further, we observe that false positives have, on average, a higher predictive epistemic uncertainty than true positives. This suggests that the uncertainty measure carries supplementary information not captured by the classification alone.
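To make the idea concrete, below is a minimal NumPy sketch (not the paper's TensorFlow implementation) of a Delta-method variance g^T F^{-1} g computed from only the top K eigenpairs of a Fisher information matrix, under the assumption that the remaining tail eigenvalues are approximated by the L2-regularization rate; the function and variable names (delta_method_variance, lam, etc.) are illustrative, and the "Fisher" matrix here is a synthetic toy example.

```python
import numpy as np

def delta_method_variance(grads, eigvals_K, eigvecs_K, lam):
    """Approximate Delta-method variance g^T F^{-1} g from the top-K
    eigenpairs of the Fisher information matrix F.

    Assumption (illustrative): the remaining P-K eigenvalues are replaced by
    the L2-regularization rate `lam`, giving a full-rank approximate inverse
        F^{-1} ~= U_K diag(1/lambda_K) U_K^T + (1/lam) (I - U_K U_K^T).

    grads     : (P,)  gradient of the network output w.r.t. the parameters
    eigvals_K : (K,)  largest eigenvalues of F
    eigvecs_K : (P,K) corresponding eigenvectors
    lam       : L2-regularization rate used as a proxy for the tail eigenvalues
    """
    proj = eigvecs_K.T @ grads                       # coordinates in the top-K eigenbasis
    top = np.sum(proj**2 / eigvals_K)                # contribution of the top-K subspace
    residual = (grads @ grads - proj @ proj) / lam   # tail handled via the L2 rate
    return top + residual

# Toy example with a synthetic positive-definite "Fisher" matrix.
rng = np.random.default_rng(0)
P, K, lam = 200, 10, 1e-2
A = rng.standard_normal((P, P))
F = A @ A.T / P + lam * np.eye(P)
eigvals, eigvecs = np.linalg.eigh(F)                 # ascending order
top_vals, top_vecs = eigvals[-K:], eigvecs[:, -K:]   # keep the top-K eigenpairs
g = rng.standard_normal(P)

approx = delta_method_variance(g, top_vals, top_vecs, lam)
exact = g @ np.linalg.solve(F, g)
print(f"approximate variance: {approx:.4f}, exact: {exact:.4f}")
```

In this sketch the approximation is cheap because only K eigenpairs are stored, and, as the abstract notes, the tail substitution becomes increasingly accurate when the smallest eigenvalues of the Fisher matrix approach the L2-regularization rate.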