ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches
Kacper Sokol, Peter A. Flach · XAI · 11 December 2019
ArXiv (abs) · PDF · HTML

Papers citing "Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches"

27 / 27 papers shown
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
Greta Warren, Irina Shklovski, Isabelle Augenstein · OffRL · 152 / 9 / 0 · 13 Feb 2025

Explaining a probabilistic prediction on the simplex with Shapley compositions
Paul-Gauthier Noé, Miquel Perelló Nieto, J. Bonastre, Peter Flach · TDI, FAtt · 68 / 0 / 0 · 02 Aug 2024

Navigating Explanatory Multiverse Through Counterfactual Path Geometry
Kacper Sokol, E. Small, Yueqing Xuan · 95 / 6 / 0 · 05 Jun 2023

bLIMEy: Surrogate Prediction Explanations Beyond LIME
Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach · FAtt · 121 / 38 / 0 · 29 Oct 2019

FACE: Feasible and Actionable Counterfactual Explanations
Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach · 73 / 369 / 0 · 20 Sep 2019

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki · 61 / 198 / 0 · 22 Jul 2019

Towards a Characterization of Explainable Systems
Dimitri Bohlender, Maximilian A. Köhl · 33 / 12 / 0 · 31 Jan 2019

Model Cards for Model Reporting
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru · 130 / 1,903 / 0 · 05 Oct 2018

Stakeholders in Explainable AI
Alun D. Preece, Daniel Harborne, Dave Braines, Richard J. Tomsett, Supriyo Chakraborty · 45 / 157 / 0 · 29 Sep 2018

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty · FaML · 138 / 166 / 0 · 20 Jun 2018

Defining Locality for Surrogates in Post-hoc Interpretablity
Thibault Laugel, X. Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki · FAtt · 83 / 80 / 0 · 19 Jun 2018

A Nutritional Label for Rankings
Ke Yang, Julia Stoyanovich, Abolfazl Asudeh, Bill Howe, H. V. Jagadish, G. Miklau · 44 / 108 / 0 · 21 Apr 2018

Datasheets for Datasets
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, Kate Crawford · 266 / 2,194 / 0 · 23 Mar 2018

The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal · 56 / 244 / 0 · 09 Mar 2018

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
Tim Miller, Piers Howe, L. Sonenberg · AI4TS, SyDa · 63 / 373 / 0 · 02 Dec 2017

The Promise and Peril of Human Evaluation for Model Interpretability
Bernease Herman · 66 / 144 / 0 · 20 Nov 2017

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter, Brent Mittelstadt, Chris Russell · MLAU · 127 / 2,361 / 0 · 01 Nov 2017

Interpretable & Explorable Approximations of Black Box Models
Himabindu Lakkaraju, Ece Kamar, R. Caruana, J. Leskovec · FAtt · 71 / 254 / 0 · 04 Jul 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · XAI · 250 / 4,273 / 0 · 22 Jun 2017

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking
Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, M. Lalmas · 61 / 208 / 0 · 20 Jun 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 216 / 2,905 / 0 · 14 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 405 / 3,809 / 0 · 28 Feb 2017

Using Visual Analytics to Interpret Predictive Machine Learning Models
Josua Krause, Adam Perer, E. Bertini · HAI · 59 / 65 / 0 · 17 Jun 2016

Model-Agnostic Interpretability of Machine Learning
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 86 / 839 / 0 · 16 Jun 2016

The Mythos of Model Interpretability
Zachary Chase Lipton · FaML · 183 / 3,706 / 0 · 10 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.2K / 17,027 / 0 · 16 Feb 2016

The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
Been Kim, Cynthia Rudin, J. Shah · 70 / 321 / 0 · 03 Mar 2015