
Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties

Abstract

Real-world data contains aleatoric uncertainty: irreducible noise arising from imperfect measurements or from incomplete knowledge of the data-generating process. Mean variance estimation (MVE) networks can learn this type of uncertainty but require ad hoc regularization strategies to avoid overfitting and cannot predict epistemic (model) uncertainty. Conversely, Bayesian neural networks predict epistemic uncertainty but are notoriously difficult to train due to the approximate nature of Bayesian inference. We propose to cooperatively train a variance network with a Bayesian neural network and demonstrate that the resulting model disentangles aleatoric and epistemic uncertainties while improving the mean estimate. We show the effectiveness and scalability of this method across a diverse range of datasets, including a time-dependent heteroscedastic regression dataset we created in which the aleatoric uncertainty is known. The proposed method is straightforward to implement, robust, and adaptable to various model architectures.
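
The abstract refers to mean variance estimation (MVE) networks as the baseline for learning aleatoric uncertainty. For reference, below is a minimal PyTorch sketch of such a network, which predicts a mean and a log-variance and is trained with a Gaussian negative log-likelihood. The architecture, layer sizes, and training loop are illustrative assumptions only; they do not reproduce the paper's cooperative training scheme or its Bayesian component.

# Minimal sketch of a mean-variance estimation (MVE) network for
# heteroscedastic regression. Sizes and names are illustrative, not the
# paper's implementation.
import torch
import torch.nn as nn

class MVENetwork(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean_head = nn.Linear(hidden, 1)     # predicted mean
        self.log_var_head = nn.Linear(hidden, 1)  # predicted log aleatoric variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(y, mean, log_var):
    # Negative log-likelihood of y under N(mean, exp(log_var));
    # the predicted variance captures aleatoric (data) noise.
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()

# Toy training loop on synthetic heteroscedastic data (illustrative only).
x = torch.linspace(-3, 3, 256).unsqueeze(-1)
y = torch.sin(x) + (0.1 + 0.2 * x.abs()) * torch.randn_like(x)

model = MVENetwork()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    mean, log_var = model(x)
    loss = gaussian_nll(y, mean, log_var)
    opt.zero_grad()
    loss.backward()
    opt.step()

Such a network outputs only a data-noise estimate; it carries no notion of model (epistemic) uncertainty, which is the gap the paper's cooperative training with a Bayesian neural network addresses.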

View on arXiv: https://arxiv.org/abs/2505.02743
@article{yi2025_2505.02743,
  title={Cooperative Bayesian and variance networks disentangle aleatoric and epistemic uncertainties},
  author={Jiaxiang Yi and Miguel A. Bessa},
  journal={arXiv preprint arXiv:2505.02743},
  year={2025}
}