EviNAM: Intelligibility and Uncertainty via Evidential Neural Additive Models

Sören Schleibaum
Anton Frederik Thielmann
Julian Teusch
Benjamin Säfken
Jörg P. Müller
Main: 6 pages · Bibliography: 2 pages · 2 figures · 3 tables
Abstract

Intelligibility and accurate uncertainty estimation are crucial for reliable decision-making. In this paper, we propose EviNAM, an extension of evidential learning that integrates the interpretability of Neural Additive Models (NAMs) with principled uncertainty estimation. Unlike standard Bayesian neural networks and previous evidential methods, EviNAM provides, in a single forward pass, estimates of both aleatoric and epistemic uncertainty together with explicit per-feature contributions. Experiments on synthetic and real data demonstrate that EviNAM matches state-of-the-art predictive performance. While we focus on regression, our method extends naturally to classification and generalized additive models, offering a path toward more intelligible and trustworthy predictions.
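To illustrate the single-pass idea the abstract describes, the sketch below combines a NAM-style additive architecture with the Normal-Inverse-Gamma parameterization commonly used in deep evidential regression: each feature has its own small subnetwork, their outputs are summed to form the evidential parameters (γ, ν, α, β), and aleatoric and epistemic uncertainty fall out in closed form. This is a minimal illustration with random, untrained weights, assuming the standard evidential-regression formulas; the network sizes and names are ours, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    # Smooth positivity constraint: log(1 + exp(x)) > 0 for all x.
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)
n_features, hidden = 3, 8

# One small MLP per feature (the NAM part). Each maps a scalar feature
# to 4 raw outputs: contributions to (gamma, nu, alpha, beta).
nets = [
    (rng.normal(size=(1, hidden)), rng.normal(size=(hidden, 4)))
    for _ in range(n_features)
]

def feature_contrib(x_j, net):
    W1, W2 = net
    h = np.tanh(x_j * W1)       # (1, hidden) hidden activation
    return (h @ W2).ravel()     # raw (gamma, nu, alpha, beta) contribution

def evinam_forward(x):
    # Additive structure: sum per-feature contributions, then constrain
    # to valid Normal-Inverse-Gamma parameters.
    raw = sum(feature_contrib(x[j], nets[j]) for j in range(n_features))
    gamma = raw[0]                           # predictive mean
    nu    = softplus(raw[1])                 # nu > 0
    alpha = softplus(raw[2]) + 1.0           # alpha > 1
    beta  = softplus(raw[3])                 # beta > 0
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model uncertainty
    return gamma, aleatoric, epistemic

mu, alea, epis = evinam_forward(np.array([0.5, -1.2, 2.0]))
```

Because uncertainty comes from the predicted distribution's parameters rather than from sampling, one forward pass suffices, and each `feature_contrib` term can be inspected individually for interpretability.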
