A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support

As the field of healthcare increasingly adopts artificial intelligence, it becomes important to understand which types of explanations increase transparency and empower users to develop confidence and trust in the predictions made by machine learning (ML) systems. In shared decision-making scenarios, where doctors cooperate with ML systems to reach an appropriate decision, establishing mutual trust is crucial. In this paper, we explore different approaches to generating explanations in eXplainable AI (XAI) and make their underlying arguments explicit so that they can be evaluated by medical experts. In particular, we present the findings of a user study conducted with physicians to investigate how they perceive various types of AI-generated explanations in the context of diagnostic decision support, with the aim of identifying which explanations are most effective and useful for enhancing the diagnostic process. In the study, medical doctors completed a survey assessing different types of explanations; a post-survey interview then provided qualitative insights into the requirements that explanations must meet when incorporated into diagnostic decision support. Overall, the insights gained from this study contribute to understanding which types of explanations are most effective in this setting.