
Explanations Go Linear: Interpretable and Individual Latent Encoding for Post-hoc Explainability

Simone Piaggesi
Riccardo Guidotti
Fosca Giannotti
Dino Pedreschi
Abstract

Post-hoc explainability is essential for understanding black-box machine learning models. Surrogate-based techniques are widely used for local and global model-agnostic explanations but have significant limitations. Local surrogates capture non-linearities but are computationally expensive and sensitive to parameters, while global surrogates are more efficient but struggle with complex local behaviors. In this paper, we present ILLUME, a flexible and interpretable framework grounded in representation learning that can be integrated with various surrogate models to provide explanations for any black-box classifier. Specifically, our approach combines a globally trained surrogate with instance-specific linear transformations learned with a meta-encoder to generate both local and global explanations. Through extensive empirical evaluations, we demonstrate the effectiveness of ILLUME in producing feature attributions and decision rules that are not only accurate but also robust and faithful to the black box, thus providing a unified explanation framework that effectively addresses the limitations of traditional surrogate methods.
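
The abstract describes the architecture only at a high level: a single surrogate is trained globally on the black-box outputs, while a meta-encoder produces an instance-specific linear transformation whose composition with the global surrogate yields local explanations. The following is a minimal, hypothetical PyTorch sketch of that idea; the names (MetaEncoder, train_illume_like, local_attributions), the latent dimension, and the squared-error fidelity objective are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MetaEncoder(nn.Module):
    """Maps an instance x to an instance-specific linear encoding matrix A(x)."""
    def __init__(self, d_in, d_latent):
        super().__init__()
        self.d_in, self.d_latent = d_in, d_latent
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, d_latent * d_in))

    def forward(self, x):
        # One (d_latent x d_in) matrix per instance.
        return self.net(x).view(-1, self.d_latent, self.d_in)

def train_illume_like(X, bb_scores, d_latent=8, epochs=200, lr=1e-2):
    """Jointly fit the meta-encoder and a shared linear surrogate so that
    w^T (A(x) x) matches the black-box score of every instance (fidelity loss)."""
    X_t = torch.as_tensor(X, dtype=torch.float32)
    y_t = torch.as_tensor(bb_scores, dtype=torch.float32)
    encoder = MetaEncoder(X_t.shape[1], d_latent)
    surrogate = nn.Linear(d_latent, 1)   # global surrogate acting on the latent encoding
    params = list(encoder.parameters()) + list(surrogate.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        A = encoder(X_t)                                      # (n, d_latent, d_in)
        z = torch.bmm(A, X_t.unsqueeze(-1)).squeeze(-1)       # instance-specific latent codes
        loss = ((surrogate(z).squeeze(-1) - y_t) ** 2).mean() # fidelity to the black box
        loss.backward()
        opt.step()
    return encoder, surrogate

def local_attributions(encoder, surrogate, x):
    """Per-feature attributions for one instance: the composed linear map w^T A(x)."""
    with torch.no_grad():
        x_t = torch.as_tensor(x, dtype=torch.float32).unsqueeze(0)
        A = encoder(x_t)[0]                       # (d_latent, d_in)
        return (surrogate.weight @ A).squeeze(0).numpy()

Under these assumptions, the composed map w^T A(x) gives a per-instance linear explanation, while the shared surrogate weights w serve as the global component.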

@article{piaggesi2025_2504.20667,
  title={Explanations Go Linear: Interpretable and Individual Latent Encoding for Post-hoc Explainability},
  author={Simone Piaggesi and Riccardo Guidotti and Fosca Giannotti and Dino Pedreschi},
  journal={arXiv preprint arXiv:2504.20667},
  year={2025}
}