Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory

10 May 2022
Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe

Papers citing "Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory"

29 / 29 papers shown

What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Somayeh Molaei, Lionel P. Robert, Nikola Banovic
09 May 2025

AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression
Dong Whi Yoo, Jiayue Melissa Shi, Violeta J. Rodriguez, Koustuv Saha
AI4MH · 26 Apr 2025

Knowledge-Augmented Explainable and Interpretable Learning for Anomaly Detection and Diagnosis
Martin Atzmueller, Tim Bohne, Patricia Windler
28 Nov 2024

Thoughtful Adoption of NLP for Civic Participation: Understanding Differences Among Policymakers
Jose A. Guridi, Cristobal Cheyre, Qian Yang
30 Oct 2024

ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen, Tiffany Knearem, Reshmi Ghosh, Yu-Ju Yang, Tanushree Mitra, Yun Huang
15 Sep 2024

HCC Is All You Need: Alignment-The Sensible Kind Anyway-Is Just Human-Centered Computing
Eric Gilbert
30 Apr 2024

Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Jessica Y. Bo, Pan Hao, Brian Y Lim
CLL · 10 Apr 2024

What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol, Julia E. Vogt
19 Mar 2024

Farsight: Fostering Responsible AI Awareness During AI Application Prototyping
Zijie J. Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, Michael A. Madaio
23 Feb 2024

A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations
G. Berman, Nitesh Goyal, Michael A. Madaio
ELM · 30 Jan 2024

The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
Fernando Delgado, Stephen Yang, Michael A. Madaio, Qian Yang
02 Oct 2023

Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek
26 May 2023

Explaining the ghosts: Feminist intersectional XAI and cartography as methods to account for invisible labour
Goda Klumbytė, Hannah Piehl, Claude Draude
05 May 2023

A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty
A. Houston, Georgina Cosma
20 Apr 2023

Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML
Hilde J. P. Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Eddie Bergman, Noor H. Awad, Joaquin Vanschoren, Mykola Pechenizkiy, B. Bischl, Frank Hutter
FaML · 15 Mar 2023

Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook
Teresa Datta, John P. Dickerson
10 Mar 2023

Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
Tim Miller
24 Feb 2023

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
Q. V. Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan
HAI · 21 Feb 2023

On the Impact of Explanations on Understanding of Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek
16 Feb 2023

A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI
Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Michael J. Muller
AI4TS · 10 Feb 2023

Seamful XAI: Operationalizing Seamful Design in Explainable AI
Upol Ehsan, Q. V. Liao, Samir Passi, Mark O. Riedl, Hal Daumé
12 Nov 2022

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods
Kasun Amarasinghe, Kit T. Rodolfa, Sérgio Jesus, Valerie Chen, Vladimir Balayan, Pedro Saleiro, P. Bizarro, Ameet Talwalkar, Rayid Ghani
24 Jun 2022

Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance
Andrew Bell, O. Nov, Julia Stoyanovich
10 Jun 2022

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol, Peter A. Flach
29 Dec 2021

From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan
FAtt · 27 Apr 2021

Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects
Samir Passi, S. Jackson
09 Feb 2020

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
FaML, HAI · 13 Dec 2018

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis, Tommi Jaakkola
CML · 06 Jul 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 28 Feb 2017