δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates

Abstract
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating Counterfactual Latent Uncertainty Explanations (CLUEs). However, for a single input, such approaches could output a variety of explanations due to the lack of constraints placed on the explanation. Here we augment the original CLUE approach to provide what we call δ-CLUE. A CLUE indicates one way to change an input, while remaining on the data manifold, such that the model becomes more confident about its prediction. We instead return a set of plausible CLUEs: multiple, diverse inputs that lie within a δ ball of the original input in latent space, all yielding confident predictions.
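The idea can be sketched as follows (a minimal illustration, not the authors' implementation): assuming a pretrained encoder/decoder pair and a differentiable uncertainty estimate (e.g. predictive entropy), one can run gradient descent in latent space from several random initialisations, projecting each candidate back onto the δ ball around the original latent code, and keep the resulting low-uncertainty decodings as a set of candidate CLUEs. The function and argument names (`delta_clues`, `encode`, `decode`, `uncertainty`) are hypothetical placeholders.

```python
# Hypothetical sketch of a delta-CLUE search; assumes user-supplied
# `encode`, `decode`, and scalar `uncertainty` callables (not from the paper).
import torch

def delta_clues(x, encode, decode, uncertainty,
                delta=1.0, n_clues=5, steps=100, lr=0.1):
    """Return counterfactual inputs decoded from latent codes within a
    delta ball of encode(x), each optimised to reduce model uncertainty."""
    z0 = encode(x).detach()
    clues = []
    for _ in range(n_clues):
        # Different random starts encourage diverse solutions in the set.
        z = (z0 + delta * torch.randn_like(z0)).requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = uncertainty(decode(z))  # lower loss = more confident model
            loss.backward()
            opt.step()
            # Project back onto the delta ball around the original latent code.
            with torch.no_grad():
                offset = z - z0
                norm = offset.norm()
                if norm > delta:
                    z.copy_(z0 + offset * (delta / norm))
        clues.append(decode(z).detach())
    return clues
```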