Visual counterfactual explainers (VCEs) are a straightforward and promising approach to enhancing the transparency of image classifiers. VCEs complement other types of explanations, such as feature attribution, by revealing the specific data transformations to which a machine learning model responds most strongly. In this paper, we argue that existing VCEs focus too narrowly on optimizing sample quality or change minimality; they fail to consider the more holistic desiderata for an explanation, such as fidelity, understandability, and sufficiency. To address this shortcoming, we explore new mechanisms for counterfactual generation and investigate how they can help fulfill these desiderata. We combine these mechanisms into a novel "smooth counterfactual explorer" (SCE) algorithm and demonstrate its effectiveness through systematic evaluations on synthetic and real data.
@article{bender2025_2506.14698,
  title={Towards Desiderata-Driven Design of Visual Counterfactual Explainers},
  author={Sidney Bender and Jan Herrmann and Klaus-Robert Müller and Grégoire Montavon},
  journal={arXiv preprint arXiv:2506.14698},
  year={2025}
}