Deceptive AI Explanations: Creation and Detection
Artificial intelligence (AI) comes with great opportunities but also great risks. Automatically generated explanations of AI decisions are deemed helpful for understanding AI, increasing transparency, and fostering trust. However, given, for example, economic incentives to create dishonest AI, can we trust its explanations? To address this question, our paper investigates to what extent AI models, i.e., deep learning, and existing instruments for increasing the transparency of AI decisions can be used to create and detect deceptive explanations. For the empirical evaluation, we focus on text classification and alter explanations produced by GradCAM, a well-established technique for generating explanations for neural networks. We then evaluate the effect of deceptive explanations on users in an experiment with 200 participants. Our findings confirm that deceptive explanations can indeed fool humans, while machine learning methods can detect even seemingly minor attempts at deception with an accuracy exceeding 80% given sufficient domain knowledge in the form of training data. Without domain knowledge, inconsistencies in the explanations can still be inferred in an unsupervised manner, given basic knowledge of the allegedly deceptive model.
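To illustrate the kind of explanation the abstract refers to, below is a minimal sketch of Grad-CAM-style token relevance for a text classifier. The architecture (embedding, 1D convolution, max pooling, linear head) and all names are illustrative assumptions, not the paper's actual setup or code; it only shows how gradient-weighted activations yield per-token relevance scores that could subsequently be altered or inspected for inconsistencies.

```python
# Illustrative sketch (assumed architecture, not the paper's implementation):
# Grad-CAM-style token relevance for a small CNN text classifier in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=64, n_filters=32, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, tokens):
        x = self.emb(tokens).transpose(1, 2)      # (batch, emb_dim, seq_len)
        feats = F.relu(self.conv(x))              # (batch, n_filters, seq_len)
        pooled = feats.max(dim=2).values          # global max pooling
        return self.fc(pooled), feats

def grad_cam_relevance(model, tokens, target_class):
    """Per-token relevance: gradient-weighted feature maps, ReLU'd and normalized."""
    logits, feats = model(tokens)
    feats.retain_grad()                            # keep gradients of the conv features
    logits[0, target_class].backward()
    weights = feats.grad.mean(dim=2, keepdim=True) # average gradient per filter
    cam = F.relu((weights * feats).sum(dim=1))     # (batch, seq_len)
    return cam / (cam.max() + 1e-8)                # scale to [0, 1]

model = TextCNN()
tokens = torch.randint(0, 10000, (1, 20))          # dummy token ids
relevance = grad_cam_relevance(model, tokens, target_class=1)
print(relevance)
```

A deceptive explanation in this setting would correspond to manipulating such a relevance vector, e.g., shifting mass away from the tokens that actually drive the prediction, while the detection task is to tell manipulated relevance vectors apart from genuine ones.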