ExplainReduce: Summarising local explanations via proxies

Abstract

Most commonly used non-linear machine learning methods are closed-box models, uninterpretable to humans. The field of explainable artificial intelligence (XAI) aims to develop tools to examine the inner workings of these closed boxes. An often-used model-agnostic approach to XAI involves using simple models as local approximations to produce so-called local explanations; examples of this approach include LIME, SHAP, and SLISEMAP. This paper shows how a large set of local explanations can be reduced to a small "proxy set" of simple models, which can act as a generative global explanation. This reduction procedure, ExplainReduce, can be formulated as an optimisation problem and approximated efficiently using greedy heuristics.
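The greedy reduction described above can be sketched as a maximum-coverage selection: each candidate local model "covers" the data points on which its loss falls below a threshold, and proxies are picked one at a time to maximise marginal coverage. The following is a minimal illustration of that idea, not the paper's exact algorithm; the loss matrix, threshold `epsilon`, and budget `k` are hypothetical inputs.

```python
import numpy as np

def greedy_proxy_selection(losses, epsilon, k):
    """Greedily select up to k proxy models from a candidate set.

    losses:  (n_models, n_points) array of each candidate local model's
             loss on each data point (hypothetical input; the paper's
             exact objective may differ).
    epsilon: loss threshold below which a model "covers" a point.
    k:       maximum number of proxies to select.
    Returns the selected model indices and the fraction of points covered.
    """
    covers = losses <= epsilon          # boolean coverage matrix
    covered = np.zeros(losses.shape[1], dtype=bool)
    selected = []
    for _ in range(k):
        gains = (covers & ~covered).sum(axis=1)  # marginal coverage
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                        # no candidate adds coverage
        selected.append(best)
        covered |= covers[best]
    return selected, covered.mean()
```

This greedy strategy is the standard (1 - 1/e)-approximation for maximum coverage, which is presumably why greedy heuristics work well for the optimisation problem the abstract mentions.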

@article{seppäläinen2025_2502.10311,
  title={ExplainReduce: Summarising local explanations via proxies},
  author={Lauri Seppäläinen and Mudong Guo and Kai Puolamäki},
  journal={arXiv preprint arXiv:2502.10311},
  year={2025}
}