On the Robustness of Interpretability Methods
David Alvarez-Melis
Tommi Jaakkola

Abstract
We argue that robustness of explanations---i.e., that similar inputs should give rise to similar explanations---is a key desideratum for interpretability. We introduce metrics to quantify robustness and demonstrate that current methods do not perform well according to these metrics. Finally, we propose ways that robustness can be enforced on existing interpretability approaches.
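The robustness notion described above can be made concrete as a local Lipschitz estimate of the explanation map: how much the explanation can change relative to a small perturbation of the input. The sketch below is an illustrative, hypothetical implementation of such a metric via random sampling in a ball around the input; the function names, the sampling scheme, and the toy gradient-based "explanation" are all assumptions for demonstration, not the authors' code.

```python
import numpy as np

def local_lipschitz_estimate(explain, x, radius=0.1, n_samples=200, seed=0):
    """Estimate the local Lipschitz constant of an explanation map at x:
    max over perturbed points x' in an L2 ball of
    ||explain(x') - explain(x)|| / ||x' - x||.
    Sampling-based, so this is a lower bound on the true local constant."""
    rng = np.random.default_rng(seed)
    e_x = explain(x)
    best = 0.0
    for _ in range(n_samples):
        # Draw a random direction, then scale to land inside the ball.
        delta = rng.normal(size=x.shape)
        delta *= radius * rng.uniform() / np.linalg.norm(delta)
        x_p = x + delta
        num = np.linalg.norm(explain(x_p) - e_x)
        den = np.linalg.norm(x_p - x)
        best = max(best, num / den)
    return best

# Toy example (hypothetical): use the gradient of a quadratic model
# f(x) = x^T A x as the "explanation"; its true Lipschitz constant is
# 2 * (largest singular value of A), so the estimate should not exceed it.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
explain = lambda x: 2 * A @ x  # gradient of the quadratic
x0 = np.array([1.0, -1.0])
L = local_lipschitz_estimate(explain, x0)
```

A large estimate at some input indicates that nearly identical inputs receive very different explanations there, which is exactly the failure mode the abstract argues current interpretability methods exhibit.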