arXiv:1901.04592
Cited By
Interpretable machine learning: definitions, methods, and applications
W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu [XAI, HAI] · 14 January 2019
Papers citing "Interpretable machine learning: definitions, methods, and applications" (50 of 329 shown)
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
Bas H. M. van der Velden, Hugo J. Kuijf, K. Gilhuijs, M. Viergever [XAI] · 29 · 636 · 0 · 22 Jul 2021
Adaptive wavelet distillation from neural networks through interpretations
Wooseok Ha, Chandan Singh, F. Lanusse, S. Upadhyayula, Bin Yu · 14 · 40 · 0 · 19 Jul 2021
Nearly-Tight and Oblivious Algorithms for Explainable Clustering
Buddhima Gamlath, Xinrui Jia, Adam Polak, O. Svensson · 23 · 23 · 0 · 30 Jun 2021
Near-Optimal Explainable k-Means for All Dimensions
Moses Charikar, Lunjia Hu · 28 · 18 · 0 · 29 Jun 2021
Machine learning in the social and health sciences
A. Leist, Matthias Klee, Jung Hyun Kim, D. Rehkopf, Stéphane P. A. Bordas, Graciela Muniz-Terrera, Sara Wade [AI4CE] · 31 · 4 · 0 · 20 Jun 2021
Rational Shapley Values
David S. Watson · 23 · 20 · 0 · 18 Jun 2021
Multi-Modal Prototype Learning for Interpretable Multivariable Time Series Classification
Gaurav R. Ghosal, R. Abbasi-Asl [AI4TS] · 9 · 7 · 0 · 17 Jun 2021
Neural Networks for Partially Linear Quantile Regression
Qixian Zhong, Jane-ling Wang · 13 · 13 · 0 · 11 Jun 2021
Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems
Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete · 6 · 18 · 0 · 09 Jun 2021
Understanding Neural Code Intelligence Through Program Simplification
Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, Mohammad Amin Alipour [AAML] · 49 · 58 · 0 · 07 Jun 2021
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore, Alan Perotti, P. Bajardi [FAtt] · 17 · 68 · 0 · 01 Jun 2021
Towards Transparent Application of Machine Learning in Video Processing
L. Murn, Marc Górriz Blanch, M. Santamaría, F. Rivera, M. Mrak · 15 · 1 · 0 · 26 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel [XAI] · 29 · 184 · 0 · 15 May 2021
Conjunction Data Messages behave as a Poisson Process
Francisco M. Caldas, Cláudia Soares, Cláudia Nunes, Marta Guimarães, Mariana Filipe, R. Ventura · 14 · 1 · 0 · 14 May 2021
Robust Sample Weighting to Facilitate Individualized Treatment Rule Learning for a Target Population
Rui Chen, J. Huling, Guanhua Chen, Menggang Yu [CML] · 8 · 1 · 0 · 03 May 2021
Explaining a Series of Models by Propagating Shapley Values
Hugh Chen, Scott M. Lundberg, Su-In Lee [TDI, FAtt] · 22 · 123 · 0 · 30 Apr 2021
What Makes a Scientific Paper be Accepted for Publication?
Panagiotis Fytas, Georgios Rizos, Lucia Specia · 16 · 10 · 0 · 14 Apr 2021
Transforming Feature Space to Interpret Machine Learning Models
A. Brenning [FAtt] · 42 · 9 · 0 · 09 Apr 2021
Robust Semantic Interpretability: Revisiting Concept Activation Vectors
J. Pfau, A. Young, Jerome Wei, Maria L. Wei, Michael J. Keiser [FAtt] · 31 · 14 · 0 · 06 Apr 2021
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi · 20 · 62 · 0 · 27 Mar 2021
Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon [MILM] · 33 · 48 · 0 · 20 Mar 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou [AAML, FaML, XAI, HAI] · 23 · 317 · 0 · 19 Mar 2021
Towards an Open Global Air Quality Monitoring Platform to Assess Children's Exposure to Air Pollutants in the Light of COVID-19 Lockdowns
Christina Last, Prithviraj Pramanik, N. Saini, Akash Majety, Do-Hyung Kim, M. García-Herranz, S. Majumdar · 8 · 1 · 0 · 17 Mar 2021
A new interpretable unsupervised anomaly detection method based on residual explanation
David F. N. Oliveira, L. Vismari, A. M. Nascimento, J. R. de Almeida, P. Cugnasca, J. Camargo, L. Almeida, Rafael Gripp, Marcelo M. Neves [AAML] · 11 · 17 · 0 · 14 Mar 2021
Interpretable Machine Learning: Moving From Mythos to Diagnostics
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar · 32 · 29 · 0 · 10 Mar 2021
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge [CML] · 47 · 176 · 0 · 07 Mar 2021
Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG
Arshdeep Sekhon, Zhe Wang, Yanjun Qi [GNN] · 13 · 0 · 0 · 03 Mar 2021
Predicting Driver Fatigue in Automated Driving with Explainability
Feng Zhou, Areen Alsaid, Mike Blommer, Reates Curry, Radhakrishnan Swaminathan, D. Kochhar, W. Talamonti, L. Tijerina [FAtt] · 11 · 5 · 0 · 03 Mar 2021
Benchmarking and Survey of Explanation Methods for Black Box Models
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo [XAI] · 33 · 220 · 0 · 25 Feb 2021
Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli [AAML, FAtt] · 21 · 57 · 0 · 25 Feb 2021
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
Sarah Wiegreffe, Ana Marasović [XAI] · 11 · 141 · 0 · 24 Feb 2021
Provable Boolean Interaction Recovery from Tree Ensemble obtained via Random Forests
Merle Behr, Yu Wang, Xiao Li, Bin Yu · 14 · 13 · 0 · 23 Feb 2021
Believe The HiPe: Hierarchical Perturbation for Fast, Robust, and Model-Agnostic Saliency Mapping
Jessica Cooper, Ognjen Arandjelovic, David J. Harrison [AAML] · 9 · 13 · 0 · 22 Feb 2021
Robust Explanations for Private Support Vector Machines
R. Mochaourab, Sugandh Sinha, S. Greenstein, P. Papapetrou · 9 · 2 · 0 · 07 Feb 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama · 136 · 120 · 0 · 21 Jan 2021
Elastic Net based Feature Ranking and Selection
Shaode Yu, Haobo Chen, Hang Yu, Zhicheng Zhang, Xiaokun Liang, Wenjian Qin, Yaoqin Xie, Ping Shi [OOD, CML] · 23 · 5 · 0 · 30 Dec 2020
Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
Jiaheng Xie, Xinyu Liu [HAI] · 25 · 10 · 0 · 21 Dec 2020
Genetic Adversarial Training of Decision Trees
Francesco Ranzato, Marco Zanella · 14 · 14 · 0 · 21 Dec 2020
Automatic Test Suite Generation for Key-Points Detection DNNs using Many-Objective Search (Experience Paper)
Fitash Ul Haq, Donghwan Shin, Lionel C. Briand, Thomas Stifter, Jun Wang [AAML] · 21 · 19 · 0 · 11 Dec 2020
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn, Stephan Jacobs, M. Senadeera, Vuong Le, S. Coghlan · 25 · 112 · 0 · 10 Dec 2020
Understanding Interpretability by generalized distillation in Supervised Classification
Adit Agarwal, Dr. K.K. Shukla, Arjan Kuijper, Anirban Mukhopadhyay [FaML, FAtt] · 24 · 0 · 0 · 05 Dec 2020
Explainable AI for ML jet taggers using expert variables and layerwise relevance propagation
G. Agarwal, L. Hay, I. Iashvili, Benjamin Mannix, C. McLean, Margaret E. Morris, S. Rappoccio, U. Schubert · 30 · 18 · 0 · 26 Nov 2020
Deep learning insights into cosmological structure formation
Luisa Lucie-Smith, H. Peiris, A. Pontzen, Brian D. Nord, Jeyan Thiyagalingam · 24 · 6 · 0 · 20 Nov 2020
Impact of Accuracy on Model Interpretations
Brian Liu, Madeleine Udell [FAtt] · 19 · 9 · 0 · 17 Nov 2020
A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber [FaML, XAI] · 25 · 752 · 0 · 16 Nov 2020
Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty
Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger · 19 · 52 · 0 · 14 Nov 2020
Generalized Constraints as A New Mathematical Problem in Artificial Intelligence: A Review and Perspective
Bao-Gang Hu, Hanbing Qu [AI4CE] · 20 · 1 · 0 · 12 Nov 2020
Privacy Preservation in Federated Learning: An insightful survey from the GDPR Perspective
N. Truong, Kai Sun, Siyao Wang, Florian Guitton, Yike Guo [FedML] · 12 · 9 · 0 · 10 Nov 2020
Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification
Agus Sudjianto, William Knauth, Rahul Singh, Zebin Yang, Aijun Zhang [FAtt] · 35 · 44 · 0 · 08 Nov 2020
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel [FAtt] · 39 · 7 · 0 · 23 Oct 2020