XAI-Units: Benchmarking Explainability Methods with Unit Tests

1 June 2025
Jun Rui Lee, Sadegh Emami, Michael David Hollins, Timothy C. H. Wong, Carlos Ignacio Villalobos Sánchez, Francesca Toni, Dekai Zhang, Adam Dejl
Main: 7 pages · Bibliography: 4 pages · Appendix: 4 pages · 11 figures · 9 tables
Abstract

Feature attribution (FA) methods are widely used in explainable AI (XAI) to help users understand how the inputs of a machine learning model contribute to its outputs. However, different FA methods often produce conflicting importance scores for the same model. In the absence of ground truth or in-depth knowledge about the inner workings of the model, it is often difficult to meaningfully determine which of the different FA methods produces more suitable explanations in different contexts. As a step towards addressing this issue, we introduce the open-source XAI-Units benchmark, specifically designed to evaluate FA methods against diverse types of model behaviours, such as feature interactions, cancellations, and discontinuous outputs. Our benchmark provides a set of paired datasets and models with known internal mechanisms, establishing clear expectations for desirable attribution scores. Accompanied by a suite of built-in evaluation metrics, XAI-Units streamlines systematic experimentation and reveals how FA methods perform against distinct, atomic kinds of model reasoning, similar to unit tests in software engineering. Crucially, by using procedurally generated models tied to synthetic datasets, we pave the way towards an objective and reliable comparison of FA methods.
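To make the unit-test analogy concrete, the sketch below shows what such a check could look like in practice. It is a minimal illustration, not code from the benchmark: the toy model, input, and expected scores are assumptions chosen for clarity, and Captum's IntegratedGradients stands in for an arbitrary FA method under test.

```python
# Minimal sketch of a "unit test" for a feature attribution method.
# The model, input, and expected attributions are illustrative
# assumptions, not taken from the XAI-Units benchmark.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class CancellationModel(nn.Module):
    """Toy model with a known mechanism: y = x0 - x1; x2 is irrelevant."""
    def __init__(self):
        super().__init__()
        self.weight = torch.tensor([1.0, -1.0, 0.0])

    def forward(self, x):
        # Output shape (N,): one scalar per example, so no target index needed.
        return (x * self.weight).sum(dim=-1)

model = CancellationModel()
fa_method = IntegratedGradients(model)  # the FA method under test

# A synthetic input paired with the model's known internals.
x = torch.tensor([[2.0, 2.0, 5.0]])
attr = fa_method.attribute(x, baselines=torch.zeros_like(x))

# Unit-test-style check: for a linear model with a zero baseline, the
# expected attribution of feature i is weight_i * x_i, so the irrelevant
# third feature must receive (near-)zero attribution.
expected = torch.tensor([[2.0, -2.0, 0.0]])
assert torch.allclose(attr, expected, atol=1e-4), f"unexpected attributions: {attr}"
print("cancellation unit test passed:", attr.tolist())
```

Because the model's mechanism is known in advance (a linear map with a zero weight on the third feature), the correct attributions are fixed before the FA method runs, which is what lets the check behave like a unit test rather than a heuristic comparison between methods.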

@article{lee2025_2506.01059,
  title={XAI-Units: Benchmarking Explainability Methods with Unit Tests},
  author={Jun Rui Lee and Sadegh Emami and Michael David Hollins and Timothy C. H. Wong and Carlos Ignacio Villalobos Sánchez and Francesca Toni and Dekai Zhang and Adam Dejl},
  journal={arXiv preprint arXiv:2506.01059},
  year={2025}
}