In defence of post-hoc explanations in medical AI

Joshua Hatherley
Lauritz Munch
Jens Christian Bjerring
Abstract

Since the early days of the Explainable AI movement, post-hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post-hoc explanations are greatly exaggerated, since such explanations merely approximate, rather than replicate, the actual reasoning processes by which black box systems arrive at their outputs. In this article, we defend the value of post-hoc explanations against this recent critique. We argue that even if post-hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users' functional understanding of those systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post-hoc explanations are not a "silver bullet" solution to the black box problem in medical AI, we conclude that they remain a useful strategy for addressing it.

@article{hatherley2025_2504.20741,
  title={In defence of post-hoc explanations in medical AI},
  author={Joshua Hatherley and Lauritz Munch and Jens Christian Bjerring},
  journal={arXiv preprint arXiv:2504.20741},
  year={2025}
}