
Activation-Space Uncertainty Quantification for Pretrained Networks

Richard Bergna
Stefan Depeweg
Sergio Calvo-Ordoñez
Jonathan Plenk
Alvaro Cartea
Jose Miguel Hernández-Lobato
Main: 8 pages · 26 figures · Bibliography: 2 pages · 17 tables · Appendix: 21 pages
Abstract

Reliable uncertainty estimates are crucial for deploying pretrained models, yet many strong methods for quantifying uncertainty require retraining, Monte Carlo sampling, or expensive second-order computations, and may alter a frozen backbone's predictions. To address this, we introduce Gaussian Process Activations (GAPA), a post-hoc method that shifts Bayesian modeling from weights to activations. GAPA replaces standard nonlinearities with Gaussian-process activations whose posterior mean exactly matches the original activation, preserving the backbone's point predictions by construction while providing closed-form epistemic variances in activation space. To scale to modern architectures, we use a sparse variational inducing-point approximation over cached training activations, combined with local k-nearest-neighbor subset conditioning, enabling deterministic single-pass uncertainty propagation without sampling, backpropagation, or second-order information. Across regression, classification, image segmentation, and language modeling, GAPA matches or outperforms strong post-hoc baselines in calibration and out-of-distribution detection while remaining efficient at test time.
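The mechanism in the abstract admits a compact illustration: if the GP prior mean is set to the original nonlinearity and the cached training activations are treated as noiseless observations of it, the residuals vanish, so the posterior mean collapses back to the nonlinearity while the posterior variance still measures distance from the cached data. Below is a minimal numpy sketch of that idea for a scalar activation; the RBF kernel, its hyperparameters, and the names `gp_activation`/`rbf_kernel` are illustrative assumptions, not the paper's implementation (which uses a sparse variational inducing-point approximation rather than exact conditioning).

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two 1-D arrays of pre-activations."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_activation(z_test, z_cache, phi, k=32, lengthscale=1.0,
                  variance=1.0, jitter=1e-6):
    """Sketch of a GP activation: the posterior mean equals phi(z) by
    construction; the variance comes from conditioning on the k nearest
    cached training pre-activations (local kNN subset conditioning)."""
    means = np.empty_like(z_test)
    vars_ = np.empty_like(z_test)
    for i, z in enumerate(z_test):
        nn = np.argsort(np.abs(z_cache - z))[:k]   # local kNN subset
        Z = z_cache[nn]
        K = rbf_kernel(Z, Z, lengthscale, variance) + jitter * np.eye(len(Z))
        k_star = rbf_kernel(np.array([z]), Z, lengthscale, variance)[0]
        # With the prior mean set to phi itself and noiseless targets
        # phi(Z), the residuals are zero, so the posterior mean reduces
        # to phi(z) exactly: the backbone's point predictions survive.
        means[i] = phi(z)
        # Closed-form epistemic variance of the GP posterior.
        v = np.linalg.solve(K, k_star)
        vars_[i] = variance - k_star @ v
    return means, vars_

# Toy usage: a tanh activation with cached pre-activations standing in
# for training data; the variance grows away from the cache (z = 5.0).
rng = np.random.default_rng(0)
z_cache = rng.normal(size=2000)
mu, var = gp_activation(np.array([0.1, 5.0]), z_cache, np.tanh, k=32)
```

Conditioning on only the k nearest cached points keeps each query's cost cubic in k rather than in the cache size, which is what makes the deterministic single-pass propagation cheap at test time.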
