
Explaining k-Nearest Neighbors: Abductive and Counterfactual Explanations

Abstract

Despite the wide use of $k$-Nearest Neighbors as classification models, their explainability properties remain poorly understood from a theoretical perspective. While nearest neighbor classifiers offer interpretability from a "data perspective", in which the classification of an input vector $\bar{x}$ is explained by identifying the vectors $\bar{v}_1, \ldots, \bar{v}_k$ in the training set that determine the classification of $\bar{x}$, we argue that such explanations can be impractical in high-dimensional applications, where each vector has hundreds or thousands of features and it is not clear what their relative importance is. Hence, we focus on understanding nearest neighbor classifications through a "feature perspective", in which the goal is to identify how the values of the features in $\bar{x}$ affect its classification. Concretely, we study abductive explanations such as "minimum sufficient reasons", which correspond to sets of features in $\bar{x}$ that are enough to guarantee its classification, and "counterfactual explanations" based on the minimum-distance feature changes one would have to perform in $\bar{x}$ to change its classification. We present a detailed landscape of positive and negative complexity results for counterfactual and abductive explanations, distinguishing between discrete and continuous feature spaces, and considering the impact of the choice of distance function involved. Finally, we show that despite some negative complexity results, Integer Quadratic Programming and SAT solving allow for computing explanations in practice.
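To make the notion of a counterfactual explanation concrete, the sketch below (not the paper's algorithm) brute-forces a minimum-distance counterfactual for a 1-NN classifier over binary features under Hamming distance: it searches feature subsets of increasing size until flipping them changes the classification. The function names and the toy data are hypothetical, and the exponential search itself hints at why the paper turns to Integer Quadratic Programming and SAT solving for practical computation.

```python
# Illustrative sketch only: brute-force minimum-distance counterfactual
# for a 1-NN classifier over binary features (Hamming distance).
# All names (knn_predict, counterfactual) and data are hypothetical.

from itertools import combinations

def knn_predict(x, train_X, train_y):
    """Classify x with 1-NN under Hamming distance."""
    dists = [sum(a != b for a, b in zip(x, v)) for v in train_X]
    return train_y[dists.index(min(dists))]

def counterfactual(x, train_X, train_y):
    """Return a smallest set of features to flip in x so that the
    1-NN classification changes (a minimum Hamming-distance counterfactual)."""
    original = knn_predict(x, train_X, train_y)
    n = len(x)
    for size in range(1, n + 1):                     # try increasing distances
        for subset in combinations(range(n), size):  # candidate feature flips
            x_cf = list(x)
            for i in subset:
                x_cf[i] = 1 - x_cf[i]
            if knn_predict(x_cf, train_X, train_y) != original:
                return subset, x_cf
    return None

# Tiny example: two training points with opposite labels.
train_X = [(0, 0, 0), (1, 1, 1)]
train_y = ["A", "B"]
print(counterfactual((0, 0, 1), train_X, train_y))   # flipping one feature changes the label
```

The nested search over subsets is exponential in the number of features in the worst case, which is consistent with the negative complexity results the abstract mentions; encoding the same search as an optimization or satisfiability problem is what makes solver-based approaches attractive in practice.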
