What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research (arXiv:2102.07817)

15 February 2021
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
Topics: XAI

Papers citing "What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research"

50 / 105 papers shown
XAI in Automated Fact-Checking? The Benefits Are Modest and There's No One-Explanation-Fits-All
Gionnieve Lim, S. Perrault
07 Aug 2023

Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, K. Verbert
31 Jul 2023

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith, Markus Langer
26 Jul 2023

Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)
Barnaby Crook, Maximilian Schluter, Timo Speith
26 Jul 2023

Sources of Opacity in Computer Systems: Towards a Comprehensive Taxonomy
Sara Mann, Barnaby Crook, Lena Kästner, Astrid Schomacker, Timo Speith
26 Jul 2023

Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank
Lars Sipos, Ulrike Schäfer, Katrin Glinka, Claudia Muller-Birn
18 Jul 2023

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
Q. V. Liao, J. Vaughan
02 Jun 2023
Being Right for Whose Right Reasons?
Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard
01 Jun 2023

Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek
26 May 2023

The Case Against Explainability
Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach
Topics: AILaw, ELM
20 May 2023

Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Nicolas Scharowski, Michaela Benk, S. J. Kühne, Léane Wettstein, Florian Brühlmann
15 May 2023

Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
L. Herm
18 Apr 2023

Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance
Jonathan Crabbé, M. Schaar
Topics: AAML
13 Apr 2023

Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm
Gabriel Lima, Nina Grgić-Hlavca, M. Cha
05 Apr 2023

Distrust in (X)AI -- Measurement Artifact or Distinct Construct?
Nicolas Scharowski, S. Perrig
Topics: HILM
29 Mar 2023
XAIR: A Framework of Explainable AI in Augmented Reality
Xuhai Xu, Anna Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, ..., Narine Kokhlikyan, Fulton Wang, P. Sorenson, Sophie Kahyun Kim, Hrvoje Benko
28 Mar 2023

PaGE-Link: Path-based Graph Neural Network Explanation for Heterogeneous Link Prediction
Shichang Zhang, Jiani Zhang, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos, Yizhou Sun
Topics: LRM
24 Feb 2023

Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
Balint Gyevnar, Nick Ferguson, Burkhard Schafer
21 Feb 2023

Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
Aditya Bhattacharya, Jeroen Ooge, Gregor Stiglic, K. Verbert
21 Feb 2023

On the Impact of Explanations on Understanding of Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek
16 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023

FATE in AI: Towards Algorithmic Inclusivity and Accessibility
Isa Inuwa-Dutse
03 Jan 2023

Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction
Peter E. D. Love, Weili Fang, J. Matthews, Stuart Porter, Hanbin Luo, L. Ding
Topics: XAI
12 Nov 2022
Explainable Artificial Intelligence in Construction: The Content, Context, Process, Outcome Evaluation Framework
Peter E. D. Love, J. Matthews, Weili Fang, Stuart Porter, Hanbin Luo, L. Ding
12 Nov 2022

Privacy Explanations - A Means to End-User Trust
Wasja Brunotte, Alexander Specht, Larissa Chazette, K. Schneider
18 Oct 2022

Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective
Katharina Weitz, C. Dang, Elisabeth André
07 Oct 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
02 Oct 2022

AI, Opacity, and Personal Autonomy
Bram Vaassen
Topics: FaML, MLAU
25 Sep 2022

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
Topics: FaML
23 Sep 2022

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, M. Schaar
22 Sep 2022

Quality Diversity Evolutionary Learning of Decision Trees
Andrea Ferigo, Leonardo Lucio Custode, Giovanni Iacca
17 Aug 2022

A Means-End Account of Explainable Artificial Intelligence
O. Buchholz
Topics: XAI
09 Aug 2022
"There Is Not Enough Information": On the Effects of Explanations on
  Perceptions of Informational Fairness and Trustworthiness in Automated
  Decision-Making
"There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making
Jakob Schoeffer
Niklas Kuehl
Yvette Machowski
FaML
25
52
0
11 May 2022
The Conflict Between Explainable and Accountable Decision-Making
  Algorithms
The Conflict Between Explainable and Accountable Decision-Making Algorithms
Gabriel Lima
Nina Grgić-Hlavca
Jin Keun Jeong
M. Cha
13
37
0
11 May 2022
Creative Uses of AI Systems and their Explanations: A Case Study from
  Insurance
Creative Uses of AI Systems and their Explanations: A Case Study from Insurance
Michaela Benk
Raphael P. Weibel
Andrea Ferrario
25
2
0
02 May 2022
A Human-Centric Perspective on Fairness and Transparency in Algorithmic
  Decision-Making
A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making
Jakob Schoeffer
FaML
28
3
0
29 Apr 2022
The Value of Measuring Trust in AI - A Socio-Technical System
  Perspective
The Value of Measuring Trust in AI - A Socio-Technical System Perspective
Michaela Benk
Suzanne Tolmeijer
F. Wangenheim
Andrea Ferrario
17
10
0
28 Apr 2022
On the Relationship Between Explanations, Fairness Perceptions, and
  Decisions
On the Relationship Between Explanations, Fairness Perceptions, and Decisions
Jakob Schoeffer
Maria De-Arteaga
Niklas Kuehl
FaML
30
6
0
27 Apr 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
19
1
0
30 Jan 2022
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial
  Contexts
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt
Michèle Finck
Eric Raidl
U. V. Luxburg
AILaw
39
77
0
25 Jan 2022
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol, Peter A. Flach
29 Dec 2021

Explanation as a process: user-centric construction of multi-level and multi-modal explanations
Bettina Finzel, David E. Tafler, Stephan Scheele, Ute Schmid
07 Oct 2021

Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems
Jakob Schoeffer, Niklas Kuehl
14 Aug 2021

Cases for Explainable Software Systems: Characteristics and Examples
Mersedeh Sadeghi, V. Klös, Andreas Vogelsang
12 Aug 2021

MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence
Stanisław Giziński, Michal Kuzba, Bartosz Pieliński, Julian Sienkiewicz, Stanislaw Laniewski, P. Biecek
29 Jul 2021

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
T. Vermeire, Thibault Laugel, X. Renard, David Martens, Marcin Detyniecki
09 Jul 2021

Explanatory Pluralism in Explainable AI
Yiheng Yao
Topics: XAI
26 Jun 2021

The Care Label Concept: A Certification Suite for Trustworthy and Resource-Aware Machine Learning
K. Morik, Helena Kotthaus, Lukas Heppe, Danny Heinrich, Raphael Fischer, Andrea Pauly, Nico Piatkowski
01 Jun 2021

Yes We Care! -- Certification for Machine Learning Methods through the Care Label Framework
K. Morik, Helena Kotthaus, Raphael Fischer, Sascha Mucke, Matthias Jakobs, Nico Piatkowski, Andrea Pauly, Lukas Heppe, Danny Heinrich
21 May 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
Topics: XAI
15 May 2021