arXiv:1802.01933
A Survey Of Methods For Explaining Black Box Models
6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
[XAI]
Papers citing "A Survey Of Methods For Explaining Black Box Models" (50 of 1,104 shown):
- OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms. Giorgio Visani, Enrico Bagli, F. Chesani. [FAtt] 10 Jun 2020
- Principles to Practices for Responsible AI: Closing the Gap. Daniel S. Schiff, B. Rakova, A. Ayesh, Anat Fanti, M. Lennon. 08 Jun 2020
- Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach. Christoph Molnar, Gunnar Konig, B. Bischl, Giuseppe Casalicchio. 08 Jun 2020
- Evaluation of Similarity-based Explanations. Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui. [XAI] 08 Jun 2020
- A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI. Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu. 06 Jun 2020
- SHADOWCAST: Controllable Graph Generation. W. Tann, E. Chang, Bryan Hooi. 06 Jun 2020
- MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations. Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, Yuan Zhu. [AAML] 04 Jun 2020
- Explainable Artificial Intelligence: a Systematic Review. Giulia Vilone, Luca Longo. [XAI] 29 May 2020
- A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers. Kevin Fauvel, Véronique Masson, Elisa Fromont. [AI4TS] 29 May 2020
- Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI). Mark T. Keane, Barry Smyth. [CML] 26 May 2020
- Adversarial NLI for Factual Correctness in Text Summarisation Models. Mario Barrantes, Benedikt Herudek, Richard Wang. 24 May 2020
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles. Mário Popolin Neto, F. Paulovich. [FAtt] 08 May 2020
- Contextualizing Hate Speech Classifiers with Post-hoc Explanation. Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren. 05 May 2020
- Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition. Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate. [AAML] 05 May 2020
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds. M. Kovalev, Lev V. Utkin. [AAML] 05 May 2020
- SurvLIME-Inf: A simplified modification of SurvLIME for explanation of machine learning survival models. Lev V. Utkin, M. Kovalev, E. Kasimov. 05 May 2020
- Post-hoc explanation of black-box classifiers using confident itemsets. M. Moradi, Matthias Samwald. 05 May 2020
- Construction and Elicitation of a Black Box Model in the Game of Bridge. V. Ventos, Daniel A. Braun, Colin Deheeger, Jean Pierre Desmoulins, Jean Baptiste Fantun, Swann Legras, Alexis Rimbaud, C. Rouveirol, H. Soldano, Solène Thépaut. 04 May 2020
- Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware? Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli. [AAML, FAtt] 04 May 2020
- WT5?! Training Text-to-Text Models to Explain their Predictions. Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan. 30 Apr 2020
- Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. [AAML, XAI] 30 Apr 2020
- Valid Explanations for Learning to Rank Models. Jaspreet Singh, Zhenye Wang, Megha Khosla, Avishek Anand. [LRM, FAtt] 29 Apr 2020
- Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models. Jayaraman J. Thiagarajan, P. Sattigeri, Deepta Rajan, Bindya Venkatesh. [MedIm] 27 Apr 2020
- Why an Android App is Classified as Malware? Towards Malware Classification Interpretation. Bozhi Wu, Sen Chen, Cuiyun Gao, Lingling Fan, Yang Liu, W. Wen, Michael R. Lyu. 24 Apr 2020
- Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs. Sungsoo Ray Hong, Jessica Hullman, E. Bertini. [HAI] 23 Apr 2020
- Learning a Formula of Interpretability to Learn Interpretable Formulas. M. Virgolin, A. D. Lorenzo, Eric Medvet, Francesca Randone. 23 Apr 2020
- Explainable Goal-Driven Agents and Robots -- A Comprehensive Review. F. Sado, C. K. Loo, W. S. Liew, Matthias Kerzel, S. Wermter. 21 Apr 2020
- Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? Alon Jacovi, Yoav Goldberg. [XAI] 07 Apr 2020
- A general framework for inference on algorithm-agnostic variable importance. B. Williamson, P. Gilbert, N. Simon, M. Carone. [FAtt, CML] 07 Apr 2020
- A New Method to Compare the Interpretability of Rule-based Algorithms. Vincent Margot, G. Luta. [FAtt] 03 Apr 2020
- Born-Again Tree Ensembles. Thibaut Vidal, Toni Pacheco, Maximilian Schiffer. 24 Mar 2020
- Dividing Deep Learning Model for Continuous Anomaly Detection of Inconsistent ICT Systems. Kengo Tajiri, Yasuhiro Ikeda, Yuusuke Nakano, Keishiro Watanabe. 24 Mar 2020
- Interpretable machine learning models: a physics-based view. Ion Matei, Johan de Kleer, C. Somarakis, R. Rai, John S. Baras. [PINN, AI4CE] 22 Mar 2020
- Layerwise Knowledge Extraction from Deep Convolutional Networks. S. Odense, Artur Garcez. [FAtt] 19 Mar 2020
- SurvLIME: A method for explaining machine learning survival models. M. Kovalev, Lev V. Utkin, E. Kasimov. 18 Mar 2020
- Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller. [XAI] 17 Mar 2020
- Harnessing Explanations to Bridge AI and Humans. Vivian Lai, Samuel Carton, Chenhao Tan. 16 Mar 2020
- Self-Supervised Discovering of Interpretable Features for Reinforcement Learning. Wenjie Shi, Gao Huang, Shiji Song, Zhuoyuan Wang, Tingyu Lin, Cheng Wu. [SSL] 16 Mar 2020
- LIMEADE: From AI Explanations to Advice Taking. Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld. 09 Mar 2020
- ViCE: Visual Counterfactual Explanations for Machine Learning Models. Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini. [AAML] 05 Mar 2020
- Robot Mindreading and the Problem of Trust. Andrés Páez. 02 Mar 2020
- A general framework for scientifically inspired explanations in AI. David Tuckey, A. Russo, Krysia Broda. 02 Mar 2020
- Testing Monotonicity of Machine Learning Models. Arnab Sharma, Heike Wehrheim. 27 Feb 2020
- Better Classifier Calibration for Small Data Sets. Alasalmi Tuomo, Jaakko Suutala, Heli Koskimäki, J. Röning. 24 Feb 2020
- Sampling for Deep Learning Model Diagnosis (Technical Report). Parmita Mehta, S. Portillo, Magdalena Balazinska, Andrew J. Connolly. [LM&MA, MLAU] 22 Feb 2020
- The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez. 22 Feb 2020
- Surrogate-free machine learning-based organ dose reconstruction for pediatric abdominal radiotherapy. M. Virgolin, Z. Wang, B. Balgobind, I. V. Dijk, J. Wiersma, ..., L. Zaletel, C. Rasch, A. Bel, Peter A. N. Bosman, Tanja Alderliesten. 17 Feb 2020
- AI safety: state of the field through quantitative lens. Mislav Juric, A. Sandic, Mario Brčič. 12 Feb 2020
- Convex Density Constraints for Computing Plausible Counterfactual Explanations. André Artelt, Barbara Hammer. 12 Feb 2020
- Leveraging Rationales to Improve Human Task Performance. Devleena Das, Sonia Chernova. 11 Feb 2020