Explaining Explanations to Society

19 January 2019
Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo
XAI

Papers citing "Explaining Explanations to Society"

9 / 9 papers shown

Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl
29 Apr 2024

Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
04 Feb 2023

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth
CML
26 Feb 2021

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
XAI
15 Feb 2021

Explainable AI for Interpretable Credit Scoring
Lara Marie Demajo, Vince Vella, A. Dingli
03 Dec 2020

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny, Mark T. Keane
10 Sep 2020

From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning (Kay R. Amel group)
Zied Bouraoui, Antoine Cornuéjols, Thierry Denoeux, Sebastien Destercke, Didier Dubois, ..., Jérôme Mengin, H. Prade, Steven Schockaert, M. Serrurier, Christel Vrain
13 Dec 2019

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
Mark T. Keane, Eoin M. Kenny
20 May 2019