See What I Mean? CUE: A Cognitive Model of Understanding Explanations

Main: 10 pages, 5 figures, 3 tables
Bibliography: 3 pages
Abstract

As machine learning systems increasingly inform critical decisions, the need for human-understandable explanations grows. Current evaluations of Explainable AI (XAI) often prioritize technical fidelity over cognitive accessibility, which critically affects users, particularly those with visual impairments. We propose CUE, a model for Cognitive Understanding of Explanations, linking explanation properties to cognitive sub-processes: legibility (perception), readability (comprehension), and interpretability (interpretation). In a study (N=455) testing heatmaps with varying colormaps (BWR, Cividis, Coolwarm), we found comparable task performance but lower confidence and effort for visually impaired users. Contrary to expectations, these gaps were not mitigated, and in some cases were worsened, by accessibility-focused colormaps such as Cividis. These results challenge assumptions about perceptual optimization and support the need for adaptive XAI interfaces. They also validate CUE by demonstrating that altering explanation legibility affects understandability. We contribute: (1) a formalized cognitive model of explanation understanding, (2) an integrated definition of human-centered explanation properties, and (3) empirical evidence motivating accessible, user-tailored XAI.

@article{labarta2025_2506.14775,
  title={See What I Mean? CUE: A Cognitive Model of Understanding Explanations},
  author={Tobias Labarta and Nhi Hoang and Katharina Weitz and Wojciech Samek and Sebastian Lapuschkin and Leander Weber},
  journal={arXiv preprint arXiv:2506.14775},
  year={2025}
}