Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl
1 February 2023 · arXiv:2302.00799

Papers citing "Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI" (28 of 28 papers shown)
  • AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression. Dong Whi Yoo, Jiayue Melissa Shi, Violeta J. Rodriguez, Koustuv Saha. 26 Apr 2025. (AI4MH)
  • A Framework for Situating Innovations, Opportunities, and Challenges in Advancing Vertical Systems with Large AI Models. Gaurav Verma, Jiawei Zhou, Mohit Chandra, Srijan Kumar, M. D. Choudhury. 03 Apr 2025.
  • "Impressively Scary:" Exploring User Perceptions and Reactions to Unraveling Machine Learning Models in Social Media Applications. Jack West, Bengisu Cagiltay, Shirley Zhang, Jingjie Li, Kassem Fawaz, Suman Banerjee. 05 Mar 2025.
  • Human-centered explanation does not fit all: The interplay of sociotechnical, cognitive, and individual factors in the effect AI explanations in algorithmic decision-making. Yongsu Ahn, Yu-Ru Lin, Malihe Alikhani, Eunjeong Cheon. 17 Feb 2025.
  • Human-Centric eXplainable AI in Education. Subhankar Maity, Aniket Deroy. 18 Oct 2024. (ELM)
  • AI Thinking: A framework for rethinking artificial intelligence in practice. Denis Newman-Griffis. 26 Aug 2024.
  • Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models. Upol Ehsan, Mark O. Riedl. 09 Aug 2024.
  • Categorizing Sources of Information for Explanations in Conversational AI Systems for Older Adults Aging in Place. N. Mathur, Tamara Zubatiy, Elizabeth Mynatt. 07 Jun 2024.
  • Large Language Models Cannot Explain Themselves. Advait Sarkar. 07 May 2024. (LRM)
  • Illuminating the Unseen: Investigating the Context-induced Harms in Behavioral Sensing. Han Zhang, V. D. Swain, Leijie Wang, Nan Gao, Yilun Sheng, Xuhai Xu, Flora D. Salim, Koustuv Saha, A. Dey, Jennifer Mankoff. 23 Apr 2024.
  • Exploring Algorithmic Explainability: Generating Explainable AI Insights for Personalized Clinical Decision Support Focused on Cannabis Intoxication in Young Adults. Tongze Zhang, Tammy Chung, Anind Dey, Sang Won Bae. 22 Apr 2024.
  • Explainability in JupyterLab and Beyond: Interactive XAI Systems for Integrated and Collaborative Workflows. G. Guo, Dustin L. Arendt, Alex Endert. 02 Apr 2024.
  • "It is there, and you need it, so why do you not use it?" Achieving better adoption of AI systems by domain experts, in the case study of natural science research. Auste Simkute, Ewa Luger, Michael Evans, Rhianne Jones. 25 Mar 2024.
  • What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks. Kacper Sokol, Julia E. Vogt. 19 Mar 2024.
  • Are We Asking the Right Questions?: Designing for Community Stakeholders' Interactions with AI in Policing. Md. Romael Haque, Devansh Saxena, Katherine Weathington, Joseph Chudzik, Shion Guha. 08 Feb 2024.
  • A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations. G. Berman, Nitesh Goyal, Michael A. Madaio. 30 Jan 2024. (ELM)
  • Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions. Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek. 24 Jan 2024.
  • InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates. Jinbin Huang, Wenbin He, Liangke Gou, Liu Ren, Chris Bryan. 06 Nov 2023. (VLM)
  • The Value-Sensitive Conversational Agent Co-Design Framework. Malak Sadek, Rafael A. Calvo, C. Mougenot. 18 Oct 2023. (3DV)
  • An Information Bottleneck Characterization of the Understanding-Workload Tradeoff. Lindsay M. Sanneman, Mycal Tucker, Julie A. Shah. 11 Oct 2023.
  • Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making. Devansh Saxena, Shion Guha. 09 Aug 2023.
  • A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI. Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Michael J. Muller. 10 Feb 2023. (AI4TS)
  • Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication. B. Keenan, Kacper Sokol. 07 Feb 2023.
  • Seamful XAI: Operationalizing Seamful Design in Explainable AI. Upol Ehsan, Q. V. Liao, Samir Passi, Mark O. Riedl, Hal Daumé. 12 Nov 2022.
  • Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl. 23 Sep 2022. (FaML)
  • Explainability Pitfalls: Beyond Dark Patterns in Explainable AI. Upol Ehsan, Mark O. Riedl. 26 Sep 2021. (XAI, SILM)
  • How to Support Users in Understanding Intelligent Systems? Structuring the Discussion. Malin Eiband, Daniel Buschek, H. Hussmann. 22 Jan 2020.
  • Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017. (XAI, FaML)