ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

15 February 2021
Markus Langer
Daniel Oster
Timo Speith
Holger Hermanns
Lena Kästner
Eva Schmidt
Andreas Sesing
Kevin Baum
    XAI

Papers citing "What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research"

50 / 105 papers shown
A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support
Felix Liedeker
Olivia Sanchez-Graillet
Moana Seidler
Christian Brandt
Jörg Wellmer
Philipp Cimiano
16
0
0
15 May 2025
Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Mahdi Dhaini
Ege Erdogan
Nils Feldhus
Gjergji Kasneci
46
0
0
02 May 2025
What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares
Timo Speith
Hanwei Zhang
Markus Langer
FAtt
XAI
38
0
0
23 Apr 2025
Mapping the Trust Terrain: LLMs in Software Engineering -- Insights and Perspectives
Dipin Khati
Yijin Liu
David Nader-Palacio
Yixuan Zhang
Denys Poshyvanyk
51
0
0
18 Mar 2025
A Unified Framework with Novel Metrics for Evaluating the Effectiveness of XAI Techniques in LLMs
Melkamu Mersha
Mesay Gemeda Yigezu
Hassan Shakil
Ali Al shami
SangHyun Byun
Jugal Kalita
59
0
0
06 Mar 2025
Optimizing Multi-Hop Document Retrieval Through Intermediate Representations
Jiaen Lin
Jingyu Liu
40
0
0
02 Mar 2025
ACE, Action and Control via Explanations: A Proposal for LLMs to Provide Human-Centered Explainability for Multimodal AI Assistants
E. A. Watkins
Emanuel Moss
R. Manuvinakurike
Meng Shi
R. Beckwith
G. Raffa
LLMAG
44
2
0
27 Feb 2025
A Scoresheet for Explainable AI
Michael Winikoff
John Thangarajah
Sebastian Rodriguez
55
0
0
14 Feb 2025
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
Greta Warren
Irina Shklovski
Isabelle Augenstein
OffRL
70
4
0
13 Feb 2025
ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
Chhavi Yadav
Evan Monroe Laufer
Dan Boneh
Kamalika Chaudhuri
85
0
0
06 Feb 2025
Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant
Gaole He
Nilay Aishwarya
U. Gadiraju
38
6
0
29 Jan 2025
Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models
Melkamu Mersha
Mesay Gemeda Yigezu
Jugal Kalita
ELM
49
3
0
26 Jan 2025
Bridging the Communication Gap: Evaluating AI Labeling Practices for Trustworthy AI Development
Raphael Fischer
Magdalena Wischnewski
Alexander van der Staay
Katharina Poitz
Christian Janiesch
Thomas Liebig
50
0
0
21 Jan 2025
Human-Readable Programs as Actors of Reinforcement Learning Agents Using Critic-Moderated Evolution
Senne Deproost
Denis Steckelmacher
Ann Nowé
26
0
0
29 Oct 2024
Give Me a Choice: The Consequences of Restricting Choices Through AI-Support for Perceived Autonomy, Motivational Variables, and Decision Performance
Cedric Faas
Richard Bergs
Sarah Sterz
Markus Langer
Anna Maria Feit
18
1
0
10 Oct 2024
Explainable AI: Definition and attributes of a good explanation for health AI
E. Kyrimi
S. McLachlan
Jared M Wohlgemut
Zane B Perkins
David A. Lagnado
W. Marsh
the ExAIDSS Expert Group
XAI
26
1
0
09 Sep 2024
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha
Khang Lam
Joseph Wood
Ali AlShami
Jugal Kalita
XAI
AI4TS
67
28
0
30 Aug 2024
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang
Shiwei Tan
Hao Wang
BDL
42
6
0
18 Jun 2024
Understanding Inter-Concept Relationships in Concept-Based Models
Naveen Raman
M. Zarlenga
M. Jamnik
27
4
0
28 May 2024
Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain
Agathe Balayn
Lorenzo Corti
Fanny Rancourt
Fabio Casati
U. Gadiraju
29
5
0
25 May 2024
Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis
Eduard Barbu
Marharytha Domnich
Raul Vicente
Nikos Sakkas
André Morim
46
1
0
20 May 2024
Challenging the Human-in-the-loop in Algorithmic Decision-making
Sebastian Tschiatschek
Eugenia Stamboliev
Timothée Schmude
Mark Coeckelbergh
Laura M. Koesten
35
1
0
17 May 2024
Explainable Interface for Human-Autonomy Teaming: A Survey
Xiangqi Kong
Yang Xing
Antonios Tsourdos
Ziyue Wang
Weisi Guo
Adolfo Perrusquía
Andreas Wikander
37
3
0
04 May 2024
Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck
Astrid Schomacker
Timo Speith
Jakob Schöffer
Lena Kästner
Niklas Kühl
41
4
0
29 Apr 2024
SIDEs: Separating Idealization from Deceptive Explanations in xAI
Emily Sullivan
49
2
0
25 Apr 2024
CAGE: Causality-Aware Shapley Value for Global Explanations
Nils Ole Breuer
Andreas Sauter
Majid Mohammadi
Erman Acar
FAtt
42
2
0
17 Apr 2024
Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Jessica Y. Bo
Pan Hao
Brian Y Lim
CLL
26
6
0
10 Apr 2024
Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare
Elisa Rubegni
Omran Ayoub
Stefania Maria Rita Rizzo
Marco Barbero
G. Bernegger
Francesca Faraci
Francesca Mangili
Emiliano Soldini
P. Trimboli
Alessandro Facchini
29
1
0
06 Apr 2024
Explainability in JupyterLab and Beyond: Interactive XAI Systems for Integrated and Collaborative Workflows
G. Guo
Dustin L. Arendt
Alex Endert
40
1
0
02 Apr 2024
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol
Julia E. Vogt
31
11
0
19 Mar 2024
What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience
Dian Lei
Yao He
Jianyou Zeng
26
1
0
21 Feb 2024
Current and future roles of artificial intelligence in retinopathy of prematurity
Ali Jafarizadeh
Shadi Farabi Maleki
Parnia Pouya
Navid Sobhi
M. Abdollahi
...
Houshyar Asadi
R. Alizadehsani
Ruyan Tan
Sheikh Mohammad Shariful Islam
U. R. Acharya
AI4CE
17
6
0
15 Feb 2024
Explaining Probabilistic Models with Distributional Values
Luca Franceschi
Michele Donini
Cédric Archambeau
Matthias Seeger
FAtt
21
2
0
15 Feb 2024
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Anton Kuznietsov
Balint Gyevnar
Cheng Wang
Steven Peters
Stefano V. Albrecht
XAI
28
26
0
08 Feb 2024
Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Timothée Schmude
Laura M. Koesten
Torsten Moller
Sebastian Tschiatschek
28
3
0
24 Jan 2024
Triamese-ViT: A 3D-Aware Method for Robust Brain Age Estimation from MRIs
Zhaonian Zhang
Richard M. Jiang
35
2
0
13 Jan 2024
Path-based Explanation for Knowledge Graph Completion
Heng Chang
Jiangnan Ye
Alejo López-Ávila
Jinhua Du
Jia Li
30
3
0
04 Jan 2024
Pyreal: A Framework for Interpretable ML Explanations
Alexandra Zytek
Wei-En Wang
Dongyu Liu
Laure Berti-Equille
K. Veeramachaneni
LRM
37
0
0
20 Dec 2023
The Metacognitive Demands and Opportunities of Generative AI
Lev Tankelevitch
Viktor Kewenig
Auste Simkute
A. E. Scott
Advait Sarkar
Abigail Sellen
Sean Rintel
AI4CE
28
96
0
18 Dec 2023
Responsibility in Extensive Form Games
Qi Shi
17
3
0
12 Dec 2023
"I Want It That Way": Enabling Interactive Decision Support Using Large Language Models and Constraint Programming
Connor Lawless
Jakob Schoeffer
Lindy Le
Kael Rowan
Shilad Sen
Cristina St. Hill
Jina Suh
Bahar Sarrafzadeh
38
8
0
12 Dec 2023
Lessons from Usable ML Deployments and Application to Wind Turbine Monitoring
Alexandra Zytek
Wei-En Wang
S. Koukoura
K. Veeramachaneni
37
0
0
05 Dec 2023
Explainable Product Classification for Customs
Eunji Lee
Sihyeon Kim
Sundong Kim
Soyeon Jung
Heeja Kim
Meeyoung Cha
16
6
0
18 Nov 2023
Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Luca Longo
Mario Brcic
Federico Cabitza
Jaesik Choi
Roberto Confalonieri
...
Andrés Páez
Wojciech Samek
Johannes Schneider
Timo Speith
Simone Stumpf
29
189
0
30 Oct 2023
A Critical Survey on Fairness Benefits of Explainable AI
Luca Deck
Jakob Schoeffer
Maria De-Arteaga
Niklas Kühl
28
10
0
15 Oct 2023
The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features
Navita Goyal
Connor Baumler
Tin Nguyen
Hal Daumé
24
6
0
12 Oct 2023
Explainable Artificial Intelligence for Drug Discovery and Development -- A Comprehensive Survey
R. Alizadehsani
Solomon Sunday Oyelere
Sadiq Hussain
Rene Ripardo Calixto
V. H. C. de Albuquerque
M. Roshanzamir
Mohamed Rahouti
Senthil Kumar Jagatheesaperumal
37
17
0
21 Sep 2023
Beyond XAI: Obstacles Towards Responsible AI
Yulu Pi
34
2
0
07 Sep 2023
Software Doping Analysis for Human Oversight
Sebastian Biewer
Kevin Baum
Sarah Sterz
Holger Hermanns
Sven Hetmank
Markus Langer
Anne Lauber-Rönsberg
Franz Lehr
20
4
0
11 Aug 2023
Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
T. Idé
Naoki Abe
35
4
0
09 Aug 2023