A Survey Of Methods For Explaining Black Box Models (arXiv:1802.01933)
6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI
Papers citing "A Survey Of Methods For Explaining Black Box Models" (50 of 419 papers shown):
What does LIME really see in images? - Damien Garreau, Dina Mardaoui - FAtt - 11 Feb 2021
EUCA: the End-User-Centered Explainable AI Framework - Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh - 04 Feb 2021
Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens - Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos - 01 Feb 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges - Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord - XAI - 13 Jan 2021
Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model - Laura Giordano, Daniele Theseider Dupré - 24 Dec 2020
Explaining Black-box Models for Biomedical Text Classification - M. Moradi, Matthias Samwald - 20 Dec 2020
XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory - Nazanin Fouladgar, Kary Främling - XAI - 17 Dec 2020
Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges - Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha - 14 Dec 2020
Explanation from Specification - Harish Naik, Gyorgy Turán - XAI - 13 Dec 2020
Demystifying Deep Neural Networks Through Interpretation: A Survey - Giang Dao, Minwoo Lee - FaML, FAtt - 13 Dec 2020
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning - Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim - FAtt - 10 Dec 2020
Machine Learning for Cataract Classification and Grading on Ophthalmic Imaging Modalities: A Survey - Xiaoqin Zhang, Yan Hu, Zunjie Xiao, Jiansheng Fang, Risa Higashita, Jiang-Dong Liu - 09 Dec 2020
Methodology for Mining, Discovering and Analyzing Semantic Human Mobility Behaviors - Clément Moreau, T. Devogele, Laurent Étienne, Verónika Peralta, Cyril de Runz - 08 Dec 2020
Neural Prototype Trees for Interpretable Fine-grained Image Recognition - Meike Nauta, Ron van Bree, C. Seifert - 03 Dec 2020
Deep Gravity: enhancing mobility flows generation with deep neural networks and geographic information - F. Simini, Gianni Barlacchi, Massimilano Luca, Luca Pappalardo - HAI - 01 Dec 2020
TimeSHAP: Explaining Recurrent Models through Sequence Perturbations - João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro - FAtt, AI4TS - 30 Nov 2020
Explaining by Removing: A Unified Framework for Model Explanation - Ian Covert, Scott M. Lundberg, Su-In Lee - FAtt - 21 Nov 2020
Interpretable collaborative data analysis on distributed data - A. Imakura, Hiroaki Inaba, Yukihiko Okada, Tetsuya Sakurai - FedML - 09 Nov 2020
Feature Removal Is a Unifying Principle for Model Explanation Methods - Ian Covert, Scott M. Lundberg, Su-In Lee - FAtt - 06 Nov 2020
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition - Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert - FAtt - 05 Nov 2020
GPUTreeShap: Massively Parallel Exact Calculation of SHAP Scores for Tree Ensembles - Rory Mitchell, E. Frank, G. Holmes - 27 Oct 2020
Abduction and Argumentation for Explainable Machine Learning: A Position Survey - A. Kakas, Loizos Michael - 24 Oct 2020
Model Interpretability through the Lens of Computational Complexity - Pablo Barceló, Mikaël Monet, Jorge A. Pérez, Bernardo Subercaseaux - 23 Oct 2020
Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games - Yunqiu Xu, Meng Fang, Ling-Hao Chen, Yali Du, Qiufeng Wang, Chengqi Zhang - OffRL - 22 Oct 2020
On Explaining Decision Trees - Yacine Izza, Alexey Ignatiev, Sasha Rubin - FAtt - 21 Oct 2020
A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images - Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro - MedIm - 20 Oct 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges - Christoph Molnar, Giuseppe Casalicchio, B. Bischl - AI4TS, AI4CE - 19 Oct 2020
Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines - A. Konstantinov, Lev V. Utkin - FedML, AI4CE - 14 Oct 2020
A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations - Andrea Ferrario, M. Loi - 09 Oct 2020
Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision - Max Glockner, Ivan Habernal, Iryna Gurevych - LRM - 07 Oct 2020
A Survey on Explainability in Machine Reading Comprehension - Mokanarangan Thayaparan, Marco Valentino, André Freitas - FaML - 01 Oct 2020
Distillation of Weighted Automata from Recurrent Neural Networks using a Spectral Approach - Rémi Eyraud, Stéphane Ayache - 28 Sep 2020
Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing - Nijat Mehdiyev, Peter Fettke - 22 Sep 2020
Machine Guides, Human Supervises: Interactive Learning with Global Explanations - Teodora Popordanoska, Mohit Kumar, Stefano Teso - 21 Sep 2020
Explainable boosted linear regression for time series forecasting - Igor Ilic, Berk Görgülü, Mucahit Cevik, M. Baydogan - AI4TS - 18 Sep 2020
Evaluation of Local Explanation Methods for Multivariate Time Series Forecasting - Ozan Ozyegen, Igor Ilic, Mucahit Cevik - FAtt, AI4TS - 18 Sep 2020
Better Model Selection with a new Definition of Feature Importance - Fan Fang, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu, Michail Basios - FAtt - 16 Sep 2020
On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning - Eoin M. Kenny, Mark T. Keane - 10 Sep 2020
Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring - Nijat Mehdiyev, Peter Fettke - AI4TS - 04 Sep 2020
Model extraction from counterfactual explanations - Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs - MIACV, MLAU - 03 Sep 2020
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay - Sasha Rubin, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska - FAtt - 13 Aug 2020
Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs - Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li - FAtt - 05 Aug 2020
Explainable Predictive Process Monitoring - Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino - 04 Aug 2020
Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance - Mattia Carletti, M. Terzi, Gian Antonio Susto - 21 Jul 2020
When stakes are high: balancing accuracy and transparency with Model-Agnostic Interpretable Data-driven suRRogates - Roel Henckaerts, Katrien Antonio, Marie-Pier Côté - 14 Jul 2020
Editable AI: Mixed Human-AI Authoring of Code Patterns - Kartik Chugh, Andrea Y. Solis, Thomas D. Latoza - 12 Jul 2020
Drug discovery with explainable artificial intelligence - José Jiménez-Luna, F. Grisoni, G. Schneider - 01 Jul 2020
Counterfactual explanation of machine learning survival models - M. Kovalev, Lev V. Utkin - CML, OffRL - 26 Jun 2020
Generative causal explanations of black-box classifiers - Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell - CML - 24 Jun 2020
Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction - Esther Puyol-Antón, Chong Chen, J. Clough, B. Ruijsink, B. Sidhu, ..., M. Elliott, Vishal S. Mehta, Daniel Rueckert, C. Rinaldi, A. King - 24 Jun 2020