
How Many and Which Training Points Would Need to be Removed to Flip this Prediction?

Abstract

We consider the problem of identifying a minimal subset of training data $\mathcal{S}_t$ such that if the instances comprising $\mathcal{S}_t$ had been removed prior to training, the categorization of a given test point $x_t$ would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of $\mathcal{S}_t$ provides a measure of robustness (if $|\mathcal{S}_t|$ is small for $x_t$, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of $\mathcal{S}_t$ may provide a novel mechanism for contesting a particular model prediction: If one can make the case that the points in $\mathcal{S}_t$ are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying $\mathcal{S}_t$ via brute-force is intractable. We propose comparatively fast approximation methods to find $\mathcal{S}_t$ based on influence functions, and find that -- for simple convex text classification models -- these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction.
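To make the idea concrete, here is a minimal sketch of how an influence-function heuristic of this kind might look for regularized logistic regression. The function names, the synthetic data, and the greedy prefix-removal strategy are illustrative assumptions, not the authors' actual implementation: training points are ranked by the first-order influence approximation of their effect on the test margin, and prefixes of that ranking are removed and the model retrained until the prediction flips.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logreg(X, y, lam=0.1):
    """L2-regularized logistic regression; labels y in {-1, +1}."""
    d = X.shape[1]
    def loss(w):
        return np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w
    def grad(w):
        s = 1.0 / (1.0 + np.exp(y * (X @ w)))      # sigma(-y * x.w)
        return -(X.T @ (y * s)) / len(y) + lam * w
    return minimize(loss, np.zeros(d), jac=grad, method="L-BFGS-B").x

def influence_on_margin(X, y, x_t, w, lam=0.1):
    """Approximate change in the test margin x_t.w caused by removing each
    training point, via the first-order influence-function formula:
    delta_i ~= (1/n) * x_t^T H^{-1} grad_w loss(x_i, y_i; w)."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-(X @ w)))             # P(y = +1 | x)
    H = (X.T * (p * (1.0 - p))) @ X / n + lam * np.eye(d)
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    grads = -(y * s)[:, None] * X                  # per-example loss gradients
    return (np.linalg.solve(H, grads.T).T @ x_t) / n

def flip_set(X, y, x_t, lam=0.1, max_remove=None):
    """Greedily drop the training points whose removal most reduces the
    current test margin, retraining after each prefix, until the predicted
    label of x_t flips. Returns the removed indices, or None on failure."""
    n = len(y)
    max_remove = max_remove or n - 1
    w = fit_logreg(X, y, lam)
    sign0 = np.sign(x_t @ w)
    scores = influence_on_margin(X, y, x_t, w, lam)
    order = np.argsort(sign0 * scores)             # most margin-reducing first
    for k in range(1, max_remove + 1):
        keep = np.setdiff1d(np.arange(n), order[:k])
        if np.sign(x_t @ fit_logreg(X[keep], y[keep], lam)) != sign0:
            return order[:k]
    return None

# Demo: heavily overlapping synthetic blobs and a borderline test point,
# standing in for the paper's text-classification setting.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.5, 1.0, (12, 2)), rng.normal(-0.5, 1.0, (10, 2))])
X = np.hstack([X, np.ones((22, 1))])               # bias column
y = np.array([1] * 12 + [-1] * 10)
x_t = np.array([0.0, 0.0, 1.0])                    # test point in the overlap
removed = flip_set(X, y, x_t, max_remove=15)
```

The ranking step is the only place influence functions enter; everything downstream is ordinary retraining, which is what makes the approach tractable relative to the brute-force search over subsets.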
