Feature attribution (FA) methods are widely used in explainable AI (XAI) to help users understand how the inputs of a machine learning model contribute to its outputs. However, different FA methods often provide disagreeing importance scores for the same model. In the absence of ground truth or in-depth knowledge about the inner workings of the model, it is often difficult to meaningfully determine which of the different FA methods produce more suitable explanations in different contexts. As a step towards addressing this issue, we introduce the open-source XAI-Units benchmark, specifically designed to evaluate FA methods against diverse types of model behaviours, such as feature interactions, cancellations, and discontinuous outputs. Our benchmark provides a set of paired datasets and models with known internal mechanisms, establishing clear expectations for desirable attribution scores. Accompanied by a suite of built-in evaluation metrics, XAI-Units streamlines systematic experimentation and reveals how FA methods perform against distinct, atomic kinds of model reasoning, similar to unit tests in software engineering. Crucially, by using procedurally generated models tied to synthetic datasets, we pave the way towards an objective and reliable comparison of FA methods.
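To make the "unit test" idea concrete, the sketch below pairs a toy model with a known internal mechanism (cancellation) with a synthetic input set, computes attributions with Captum's IntegratedGradients, and compares them against the expected scores implied by the mechanism. This is a minimal illustration of the concept only; the class and variable names are invented here and are not the XAI-Units API.

```python
# Minimal sketch of the unit-test idea: a model with a known mechanism,
# a synthetic dataset, and a check of attributions against the expectation.
# Names such as CancellationModel are illustrative, not part of XAI-Units.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


class CancellationModel(nn.Module):
    """Toy model with a known mechanism: f(x) = x1 - x2.

    When x1 == x2 the two features cancel in the output, yet both drive the
    prediction, so a faithful FA method should give them non-zero,
    opposite-signed attribution scores.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x[:, 0] - x[:, 1]


# Synthetic inputs chosen so that cancellation actually occurs (first row).
inputs = torch.tensor([[2.0, 2.0], [1.0, -1.0], [0.5, 3.0]])
baseline = torch.zeros_like(inputs)

model = CancellationModel()
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, baselines=baseline)

# For this linear mechanism with a zero baseline, the expected attribution of
# feature i is w_i * x_i with w = [1, -1]; this plays the role of the known
# expectation that the benchmark evaluates FA outputs against.
expected = inputs * torch.tensor([1.0, -1.0])
print("attributions:\n", attributions)
print("matches expectation:", torch.allclose(attributions, expected, atol=1e-4))
```

In the same spirit, other atomic behaviours (feature interactions, discontinuous outputs) would each get their own paired model and dataset with a corresponding expected attribution pattern.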
@article{lee2025_2506.01059,
  title   = {XAI-Units: Benchmarking Explainability Methods with Unit Tests},
  author  = {Jun Rui Lee and Sadegh Emami and Michael David Hollins and Timothy C. H. Wong and Carlos Ignacio Villalobos Sánchez and Francesca Toni and Dekai Zhang and Adam Dejl},
  journal = {arXiv preprint arXiv:2506.01059},
  year    = {2025}
}