Sanity Checks for Saliency Maps
arXiv:1810.03292 · 8 October 2018
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Tags: FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps" (50 of 357 papers shown)

- Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis. Amitojdeep Singh, J. Balaji, M. Rasheed, Varadharajan Jayakumar, R. Raman, Vasudevan Lakshminarayanan. [BDL, XAI, FAtt] (26 Sep 2020)
- What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors. Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik. [XAI] (22 Sep 2020)
- Contextual Semantic Interpretability. Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia. [SSL] (18 Sep 2020)
- Captum: A unified and generic model interpretability library for PyTorch. Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson. [FAtt] (16 Sep 2020)
- Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset. Erico Tjoa, Cuntai Guan. [XAI, FAtt] (07 Sep 2020)
- Model extraction from counterfactual explanations. Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs. [MIACV, MLAU] (03 Sep 2020)
- Survey of XAI in digital pathology. Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström. (14 Aug 2020)
- Deep Learning in Protein Structural Modeling and Design. Wenhao Gao, S. Mahajan, Jeremias Sulam, Jeffrey J. Gray. (16 Jul 2020)
- A simple defense against adversarial attacks on heatmap explanations. Laura Rieger, Lars Kai Hansen. [FAtt, AAML] (13 Jul 2020)
- Drug discovery with explainable artificial intelligence. José Jiménez-Luna, F. Grisoni, G. Schneider. (01 Jul 2020)
- Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency. Robert Geirhos, Kristof Meding, Felix Wichmann. (30 Jun 2020)
- Directional convergence and alignment in deep learning. Ziwei Ji, Matus Telgarsky. (11 Jun 2020)
- Adversarial Infidelity Learning for Model Interpretation. Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang. [AAML] (09 Jun 2020)
- Higher-Order Explanations of Graph Neural Networks via Relevant Walks. Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon. (05 Jun 2020)
- Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps. Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir. [FAtt] (18 May 2020)
- Explaining AI-based Decision Support Systems using Concept Localization Maps. Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed. (04 May 2020)
- Evaluating and Aggregating Feature-based Model Explanations. Umang Bhatt, Adrian Weller, J. M. F. Moura. [XAI] (01 May 2020)
- Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. [AAML, XAI] (30 Apr 2020)
- Generating Fact Checking Explanations. Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein. (13 Apr 2020)
- TSInsight: A local-global attribution framework for interpretability in time-series data. Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel, Sheraz Ahmed. [FAtt, AI4TS] (06 Apr 2020)
- Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI. L. Arras, Ahmed Osman, Wojciech Samek. [XAI, AAML] (16 Mar 2020)
- Measuring and improving the quality of visual explanations. Agnieszka Grabska-Barwińska. [XAI, FAtt] (14 Mar 2020)
- IROF: a low resource evaluation metric for explanation methods. Laura Rieger, Lars Kai Hansen. (09 Mar 2020)
- Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation. Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu. [CML, ELM, XAI] (09 Mar 2020)
- Gradient-Adjusted Neuron Activation Profiles for Comprehensive Introspection of Convolutional Speech Recognition Models. A. Krug, Sebastian Stober. (19 Feb 2020)
- Explaining Explanations: Axiomatic Feature Interactions for Deep Networks. Joseph D. Janizek, Pascal Sturmfels, Su-In Lee. [FAtt] (10 Feb 2020)
- Concept Whitening for Interpretable Image Recognition. Zhi Chen, Yijie Bei, Cynthia Rudin. [FAtt] (05 Feb 2020)
- Deceptive AI Explanations: Creation and Detection. Johannes Schneider, Christian Meske, Michalis Vlachos. (21 Jan 2020)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks. Q. Huang, M. Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi-Ju Chang. [FAtt] (17 Jan 2020)
- Making deep neural networks right for the right scientific reasons by interacting with their explanations. P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting. (15 Jan 2020)
- On Interpretability of Artificial Neural Networks: A Survey. Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang. [AAML, AI4CE] (08 Jan 2020)
- When Explanations Lie: Why Many Modified BP Attributions Fail. Leon Sixt, Maximilian Granz, Tim Landgraf. [BDL, FAtt, XAI] (20 Dec 2019)
- On the Explanation of Machine Learning Predictions in Clinical Gait Analysis. D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak. (16 Dec 2019)
- CXPlain: Causal Explanations for Model Interpretation under Uncertainty. Patrick Schwab, W. Karlen. [FAtt, CML] (27 Oct 2019)
- Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera. [XAI] (22 Oct 2019)
- Semantics for Global and Local Interpretation of Deep Neural Networks. Jindong Gu, Volker Tresp. [AI4CE] (21 Oct 2019)
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks. Ruth C. Fong, Mandela Patrick, Andrea Vedaldi. [AAML] (18 Oct 2019)
- Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods. Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom. [FAtt, AAML] (04 Oct 2019)
- Visual Explanation for Deep Metric Learning. Sijie Zhu, Taojiannan Yang, Cheng Chen. [FAtt] (27 Sep 2019)
- Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey. Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger. (08 Sep 2019)
- Saccader: Improving Accuracy of Hard Attention Models for Vision. Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le. [VLM] (20 Aug 2019)
- Visual Interaction with Deep Learning Models through Collaborative Semantic Inference. Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush. [HAI] (24 Jul 2019)
- AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks. Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong. [AIFin] (24 Jul 2019)
- Graph Neural Network for Interpreting Task-fMRI Biomarkers. Xiaoxiao Li, Nicha Dvornek, Yuan Zhou, Juntang Zhuang, P. Ventola, James S. Duncan. (02 Jul 2019)
- Incorporating Priors with Feature Attribution on Text Classification. Frederick Liu, Besim Avci. [FAtt, FaML] (19 Jun 2019)
- The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis. Cynthia Rudin, David Carlson. [HAI] (04 Jun 2019)
- Adversarial Robustness as a Prior for Learned Representations. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, A. Madry. [OOD, AAML] (03 Jun 2019)
- Certifiably Robust Interpretation in Deep Learning. Alexander Levine, Sahil Singla, S. Feizi. [FAtt, AAML] (28 May 2019)
- Interpreting Adversarially Trained Convolutional Neural Networks. Tianyuan Zhang, Zhanxing Zhu. [AAML, GAN, FAtt] (23 May 2019)
- What Do Adversarially Robust Models Look At? Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima. (19 May 2019)