This paper introduces a formal notion of fixed point explanations, inspired by the "why regress" principle, to assess the stability of the interplay between a model and its explainer through recursive applications of the explainer. Fixed point explanations satisfy properties such as minimality, stability, and faithfulness, and they reveal hidden model behaviours as well as explanatory weaknesses. We define convergence conditions for several classes of explainers, from feature-based methods to mechanistic tools such as Sparse AutoEncoders, and we report quantitative and qualitative results.
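The recursive mechanism described above (apply the explainer, restrict the input to the returned explanation, and re-apply until the explanation stabilises) can be illustrated with a minimal sketch. The names below (explainer, restrict, the feature-subset representation) are hypothetical placeholders chosen for illustration under the assumption of a feature-based explainer; they are not the paper's formulation or code.

def fixed_point_explanation(model, explainer, x, max_iters=10):
    """Iterate: explanation -> restricted input -> new explanation,
    stopping when the explanation no longer changes (a fixed point)
    or the iteration budget is exhausted."""
    current = explainer(model, x)            # initial explanation: a set of feature indices
    for _ in range(max_iters):
        x_restricted = restrict(x, current)  # keep only the explained features
        nxt = explainer(model, x_restricted)
        if nxt == current:                   # fixed point reached: explanation is stable
            return current, True
        current = nxt
    return current, False                    # did not converge within the budget

def restrict(x, features):
    """Mask (zero out) every feature of x not contained in the explanation set."""
    return [xi if i in features else 0 for i, xi in enumerate(x)]

The boolean flag returned alongside the explanation indicates whether a fixed point was reached, which is the kind of convergence behaviour the paper studies across explainer classes.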
@article{malfa2025_2505.12421,
  title   = {Fixed Point Explainability},
  author  = {Emanuele La Malfa and Jon Vadillo and Marco Molinari and Michael Wooldridge},
  journal = {arXiv preprint arXiv:2505.12421},
  year    = {2025}
}