arXiv: 1606.05386
Cited By
Model-Agnostic Interpretability of Machine Learning
16 June 2016
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
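For context on what "model-agnostic" means in practice, a minimal sketch of a local surrogate explanation in the spirit of this paper (and the authors' related LIME work) is shown below. It is an illustration only, not the procedure from the paper: the function name explain_locally, the Gaussian perturbation scheme, the locality kernel, and the dataset are assumptions made for the example, using scikit-learn and NumPy.

```python
# Illustrative only: a model-agnostic local surrogate explanation in the
# spirit of this paper and the authors' LIME work. Function names, the
# perturbation scheme, and the kernel are assumptions made for the sketch.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Treat the trained model as a black box: only its predict_proba is used.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(predict_proba, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Fit a distance-weighted linear surrogate around instance x, using
    only the black box's predictions on perturbed copies of x."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)                                 # per-feature perturbation scale
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    pz = predict_proba(Z)[:, 1]                           # black-box outputs on perturbations
    d = np.linalg.norm((Z - x) / scale, axis=1)           # distance in standardized units
    w = np.exp(-(d ** 2) / (kernel_width * x.shape[0]))   # locality kernel (heuristic choice)
    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
    return surrogate.coef_                                # local feature attributions

coefs = explain_locally(black_box.predict_proba, X[0])
for i in np.argsort(-np.abs(coefs))[:5]:
    print(f"{data.feature_names[i]}: {coefs[i]:+.4f}")
```

Because the explainer interacts with the model only through its prediction function, the same sketch applies unchanged to any classifier, which is the core point the paper argues for.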
Papers citing "Model-Agnostic Interpretability of Machine Learning"
50 / 118 papers shown
Interpreting Embedding Spaces by Conceptualization
Adi Simhi
Shaul Markovitch
29
5
0
22 Aug 2022
Data Science and Machine Learning in Education
G. Benelli
Thomas Y. Chen
Javier Mauricio Duarte
Matthew Feickert
Matthew Graham
...
K. Terao
S. Thais
A. Roy
J. Vlimant
G. Chachamis
AI4CE
30
5
0
19 Jul 2022
How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India
Divya Ramesh
Vaishnav Kameswaran
Ding-wen Wang
Nithya Sambasivan
30
35
0
11 May 2022
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan
Haoran Zhang
Kimia Hamidieh
Thomas Hartvigsen
Frank Rudzicz
Marzyeh Ghassemi
45
78
0
06 May 2022
Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making
Max Schemmer
Patrick Hemmer
Niklas Kühl
Carina Benz
G. Satzger
22
56
0
14 Apr 2022
EEG based Emotion Recognition: A Tutorial and Review
Xiang Li
Yazhou Zhang
Prayag Tiwari
D. Song
Bin Hu
Meihong Yang
Zhigang Zhao
Neeraj Kumar
Pekka Marttinen
25
249
0
16 Mar 2022
Counterfactual Explanations for Predictive Business Process Monitoring
Tsung-Hao Huang
Andreas Metzger
Klaus Pohl
32
19
0
24 Feb 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna
Tessa Han
Alex Gu
Steven Wu
S. Jabbari
Himabindu Lakkaraju
194
186
0
03 Feb 2022
Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism
Siqi Miao
Miaoyuan Liu
Pan Li
18
197
0
31 Jan 2022
Measuring Attribution in Natural Language Generation Models
Hannah Rashkin
Vitaly Nikolaev
Matthew Lamm
Lora Aroyo
Michael Collins
Dipanjan Das
Slav Petrov
Gaurav Singh Tomar
Iulia Turc
David Reitter
39
174
0
23 Dec 2021
Why Are You Weird? Infusing Interpretability in Isolation Forest for Anomaly Detection
Nirmal Sobha Kartha
Clément Gautrais
Vincent Vercruyssen
19
6
0
13 Dec 2021
Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin
Elena Sergeeva
W. Weng
Geeticka Chauhan
Peter Szolovits
OOD
56
55
0
05 Dec 2021
Decorrelated Variable Importance
I. Verdinelli
Larry A. Wasserman
FAtt
17
18
0
21 Nov 2021
Explaining Deep Reinforcement Learning Agents In The Atari Domain through a Surrogate Model
Alexander Sieusahai
Matthew J. Guzdial
35
13
0
07 Oct 2021
Discriminative Attribution from Counterfactuals
N. Eckstein
A. S. Bates
G. Jefferis
Jan Funke
FAtt
CML
27
1
0
28 Sep 2021
AdjointNet: Constraining machine learning models with physics-based codes
S. Karra
B. Ahmmed
M. Mudunuru
AI4CE
PINN
OOD
24
4
0
08 Sep 2021
Diagnostics-Guided Explanation Generation
Pepa Atanasova
J. Simonsen
Christina Lioma
Isabelle Augenstein
LRM
FAtt
43
6
0
08 Sep 2021
Logic Explained Networks
Gabriele Ciravegna
Pietro Barbiero
Francesco Giannini
Marco Gori
Pietro Lio
Marco Maggini
S. Melacci
42
69
0
11 Aug 2021
Levels of explainable artificial intelligence for human-aligned conversational explanations
Richard Dazeley
Peter Vamplew
Cameron Foale
Charlotte Young
Sunil Aryal
F. Cruz
30
90
0
07 Jul 2021
Entropy-based Logic Explanations of Neural Networks
Pietro Barbiero
Gabriele Ciravegna
Francesco Giannini
Pietro Lio
Marco Gori
S. Melacci
FAtt
XAI
30
78
0
12 Jun 2021
Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé
M. Schaar
FAtt
AI4TS
23
80
0
09 Jun 2021
On Efficiently Explaining Graph-Based Classifiers
Xuanxiang Huang
Yacine Izza
Alexey Ignatiev
Sasha Rubin
FAtt
41
37
0
02 Jun 2021
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore
Alan Perotti
P. Bajardi
FAtt
33
68
0
01 Jun 2021
Information-theoretic Evolution of Model Agnostic Global Explanations
Sukriti Verma
Nikaash Puri
Piyush B. Gupta
Balaji Krishnamurthy
FAtt
29
0
0
14 May 2021
Machine learning approach to dynamic risk modeling of mortality in COVID-19: a UK Biobank study
M. Dabbah
Angus B. Reed
A. Booth
A. Yassaee
A. Despotovic
...
Emily Binning
M. Aral
D. Plans
A. Labrique
D. Mohan
14
17
0
19 Apr 2021
Editing Factual Knowledge in Language Models
Nicola De Cao
Wilker Aziz
Ivan Titov
KELM
68
478
0
16 Apr 2021
Generative Causal Explanations for Graph Neural Networks
Wanyu Lin
Hao Lan
Baochun Li
CML
36
173
0
14 Apr 2021
Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation
Alfredo Carrillo
Luis F. Cantú
Luis Tejerina
Alejandro Noriega
13
2
0
09 Apr 2021
Explainable AI by BAPC -- Before and After correction Parameter Comparison
F. Sobieczky
Manuela Geiß
16
1
0
12 Mar 2021
Evaluating Robustness of Counterfactual Explanations
André Artelt
Valerie Vaquet
Riza Velioglu
Fabian Hinder
Johannes Brinkrolf
M. Schilling
Barbara Hammer
14
46
0
03 Mar 2021
Connecting Interpretability and Robustness in Decision Trees through Separation
Michal Moshkovitz
Yao-Yuan Yang
Kamalika Chaudhuri
33
22
0
14 Feb 2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh
Steven R. Gomez
K. Nam
Arvind Satyanarayan
34
126
0
24 Jan 2021
Dissonance Between Human and Machine Understanding
Zijian Zhang
Jaspreet Singh
U. Gadiraju
Avishek Anand
59
74
0
18 Jan 2021
Explainable AI for Software Engineering
Chakkrit Tantithamthavorn
Jirayus Jiarpakdee
J. Grundy
29
58
0
03 Dec 2020
TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
João Bento
Pedro Saleiro
André F. Cruz
Mário A. T. Figueiredo
P. Bizarro
FAtt
AI4TS
24
88
0
30 Nov 2020
On Explaining Decision Trees
Yacine Izza
Alexey Ignatiev
Sasha Rubin
FAtt
24
85
0
21 Oct 2020
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
M. Schlichtkrull
Nicola De Cao
Ivan Titov
AI4CE
36
214
0
01 Oct 2020
Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task
Han-Ching Wu
Wenjie Ruan
Jiangtao Wang
Dingchang Zheng
Bei Liu
...
Xiangfei Chai
Jian Chen
Kunwei Li
Shaolin Li
A. Helal
32
25
0
30 Sep 2020
A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova
J. Simonsen
Christina Lioma
Isabelle Augenstein
XAI
FAtt
22
220
0
25 Sep 2020
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Dylan Slack
Sophie Hilgard
Sameer Singh
Himabindu Lakkaraju
FAtt
29
162
0
11 Aug 2020
Causal Explanations of Image Misclassifications
Yan Min
Miles K. Bennett
CML
16
0
0
28 Jun 2020
Location, location, location: Satellite image-based real-estate appraisal
Jan-Peter Kucklick
Oliver Müller
28
5
0
04 Jun 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
49
371
0
30 Apr 2020
Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel
Parisa Rashidi
AI4TS
33
17
0
27 Apr 2020
An Extension of LIME with Improvement of Interpretability and Fidelity
Sheng Shi
Yangzhou Du
Wei Fan
FAtt
16
8
0
26 Apr 2020
Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah
Mansooreh Karami
Ruocheng Guo
A. Raglin
Huan Liu
CML
ELM
XAI
32
213
0
09 Mar 2020
Interpretability of machine learning based prediction models in healthcare
Gregor Stiglic
Primož Kocbek
Nino Fijačko
Marinka Zitnik
K. Verbert
Leona Cilar
AI4CE
35
374
0
20 Feb 2020
A Modified Perturbed Sampling Method for Local Interpretable Model-agnostic Explanation
Sheng Shi
Xinfeng Zhang
Wei Fan
FAtt
19
28
0
18 Feb 2020
Convex Density Constraints for Computing Plausible Counterfactual Explanations
André Artelt
Barbara Hammer
19
47
0
12 Feb 2020
Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems
C. E. Smith
Bowen Yu
Anjali Srivastava
Aaron L Halfaker
Loren G. Terveen
Haiyi Zhu
KELM
21
69
0
14 Jan 2020