
The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory

Ankur Taly
Abstract

A number of techniques have been proposed to explain a machine learning (ML) model's prediction by attributing it to the corresponding input features. Popular among these are techniques that apply the Shapley value method from cooperative game theory. While existing papers focus on the axiomatic motivation of Shapley values and on efficient techniques for computing them, they neither justify the game formulations used nor address the uncertainty implicit in their methods' outputs. For instance, the SHAP algorithm's formulation may give substantial attributions to features that play no role in the model. Furthermore, without infinite data and computation, SHAP attributions are approximations subject to hitherto uncharacterized uncertainty. In this work, we illustrate how subtle differences in the underlying game formulations of existing methods can cause large differences in the attributions for a prediction. We then present a general game formulation that unifies existing methods. Using the primitive of single-reference games, we decompose the Shapley values of the general game formulation into Shapley values of single-reference games. This decomposition enables us to introduce confidence intervals that quantify the uncertainty in estimated attributions, and it further enables contrastive explanations of a prediction through comparisons with different groups of reference inputs. We tie this idea to classic work on Norm Theory in cognitive psychology, and propose a general framework for generating explanations for ML models, called formulate, approximate, and explain (FAE).
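To make the decomposition idea concrete, the sketch below estimates attributions for one prediction by Monte Carlo sampling of single-reference games (sampling random feature orderings between a reference input and the input being explained), then averages the per-reference estimates and reports a normal-approximation confidence interval. This is not the paper's implementation: it is a minimal illustration assuming a `model` callable that maps a batch of feature vectors to scalar outputs, with `x` the input being explained and `references` a small background set; the function names and the permutation-sampling estimator are illustrative choices.

```python
import numpy as np

def single_reference_shapley(model, x, reference, num_permutations=200, rng=None):
    """Monte Carlo Shapley estimate for a single-reference game in which a
    coalition S plays v(S) = model(z), where z takes x's values on S and the
    reference's values elsewhere. Returns one estimate per feature."""
    rng = np.random.default_rng(rng)
    d = len(x)
    contributions = np.zeros((num_permutations, d))
    for p in range(num_permutations):
        order = rng.permutation(d)              # random feature ordering
        current = np.array(reference, dtype=float)
        prev = float(model(current[None, :])[0])
        for i in order:
            current[i] = x[i]                   # switch feature i from reference to x
            new = float(model(current[None, :])[0])
            contributions[p, i] = new - prev    # marginal contribution of feature i
            prev = new
    return contributions.mean(axis=0)

def attribute_with_confidence(model, x, references, num_permutations=200, rng=None):
    """Average single-reference estimates over a group of references and report
    a normal-approximation 95% confidence interval per feature
    (requires at least two references)."""
    rng = np.random.default_rng(rng)
    per_reference = np.array([
        single_reference_shapley(model, x, r, num_permutations, rng)
        for r in references
    ])
    mean = per_reference.mean(axis=0)
    stderr = per_reference.std(axis=0, ddof=1) / np.sqrt(len(per_reference))
    return mean, mean - 1.96 * stderr, mean + 1.96 * stderr
```

In this sketch, averaging the per-reference estimates plays the role of recombining single-reference games into the more general formulation, and the interval width reflects how much the attribution varies across the chosen group of references (with permutation noise folded into each per-reference estimate). Choosing different reference groups, such as inputs from a contrasting class, is what yields contrastive explanations in the sense described above.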
