The Who in XAI: How AI Background Shapes Perceptions of AI Explanations

28 July 2021
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl

Papers citing "The Who in XAI: How AI Background Shapes Perceptions of AI Explanations"

48 citing papers listed:

What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
  Somayeh Molaei, Lionel P. Robert, Nikola Banovic | 09 May 2025

Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
  M. Domnich, Rasmus Moorits Veski, Julius Valja, Kadi Tulver, Raul Vicente | Topics: FAtt | 07 Apr 2025

"Impressively Scary:" Exploring User Perceptions and Reactions to Unraveling Machine Learning Models in Social Media Applications
  Jack West, Bengisu Cagiltay, Shirley Zhang, Jingjie Li, Kassem Fawaz, Suman Banerjee | 05 Mar 2025

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
  Greta Warren, Irina Shklovski, Isabelle Augenstein | Topics: OffRL | 13 Feb 2025

Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies
  Sunnie S. Y. Kim, J. Vaughan, Q. V. Liao, Tania Lombrozo, Olga Russakovsky | 12 Feb 2025

Evaluating the Influences of Explanation Style on Human-AI Reliance
  Emma Casolin, Flora D. Salim, Ben Newell | 26 Oct 2024

Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
  Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh | Topics: TDI | 25 Sep 2024

The Great AI Witch Hunt: Reviewers Perception and (Mis)Conception of Generative AI in Research Writing
  Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke | 27 Jun 2024

Scenarios and Approaches for Situated Natural Language Explanations
  Pengshuo Qiu, Frank Rudzicz, Zining Zhu | Topics: LRM | 07 Jun 2024

Resistance Against Manipulative AI: key factors and possible actions
  Piotr Wilczyński, Wiktoria Mieleszczenko-Kowszewicz, P. Biecek | 22 Apr 2024

Exploring Practitioner Perspectives On Training Data Attribution Explanations
  Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh | Topics: TDI | 31 Oct 2023

May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
  Tong Zhang, X. J. Yang, Boyang Albert Li | 25 Sep 2023

Science Communications for Explainable Artificial Intelligence
  Simon Hudson, Matija Franklin | 31 Aug 2023

Explaining the Arts: Toward a Framework for Matching Creative Tasks with Appropriate Explanation Mediums
  M. Clemens | 18 Aug 2023

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
  Meike Nauta, Christin Seifert | 26 Jul 2023

Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank
  Lars Sipos, Ulrike Schäfer, Katrin Glinka, Claudia Müller-Birn | 18 Jul 2023

"It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values
  R. Varanasi, Nitesh Goyal | 14 Jul 2023

Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies
  Uwe Peters, Mary Carman | Topics: ELM | 08 May 2023

Towards Feminist Intersectional XAI: From Explainability to Response-Ability
  Goda Klumbytė, Hannah Piehl, Claude Draude | 05 May 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
  L. Nannini, Agathe Balayn, A. Smith | 20 Apr 2023

Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm
  Gabriel Lima, Nina Grgić-Hlača, M. Cha | 05 Apr 2023

XAIR: A Framework of Explainable AI in Augmented Reality
  Xuhai Xu, Anna Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, ..., Narine Kokhlikyan, Fulton Wang, P. Sorenson, Sophie Kahyun Kim, Hrvoje Benko | 28 Mar 2023

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations
  E. Small, Yueqing Xuan, Danula Hettiachchi, Kacper Sokol | 02 Mar 2023

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
  Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl | 01 Feb 2023

Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care
  Venkatesh Sivaraman, L. Bukowski, J. Levin, J. Kahn, Adam Perer | 31 Jan 2023

Selective Explanations: Leveraging Human Input to Align Explainable AI
  Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan | 23 Jan 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
  Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal | 18 Jan 2023

Social Construction of XAI: Do We Need One Definition to Rule Them All?
  Upol Ehsan, Mark O. Riedl | 11 Nov 2022

The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
  Matija Franklin | Topics: TDI | 05 Oct 2022

On the Influence of Cognitive Styles on Users' Understanding of Explanations
  Lara Riefle, Patrick Hemmer, Carina Benz, Michael Vossing, Jannik Pries | 05 Oct 2022

A Mixed-Methods Analysis of the Algorithm-Mediated Labor of Online Food Deliverers in China
  Zhilong Chen, Xiaochong Lan, J. Piao, Yunke Zhang, Yong Li | 09 Aug 2022

Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning
  David Lindner, Mennatallah El-Assady | Topics: OffRL | 27 Jun 2022

Mediators: Conversational Agents Explaining NLP Model Behavior
  Nils Feldhus, A. Ravichandran, Sebastian Möller | 13 Jun 2022

Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance
  Andrew Bell, O. Nov, Julia Stoyanovich | 10 Jun 2022

The Conflict Between Explainable and Accountable Decision-Making Algorithms
  Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, M. Cha | 11 May 2022

Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory
  Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe | 10 May 2022

Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning
  Ulrike Kuhl, André Artelt, Barbara Hammer | 06 May 2022

Tell Me Something That Will Help Me Trust You: A Survey of Trust Calibration in Human-Agent Interaction
  G. Cancro, Shimei Pan, James R. Foulds | 06 May 2022

The Risks of Machine Learning Systems
  Samson Tan, Araz Taeihagh, K. Baxter | 21 Apr 2022

Human Interpretation of Saliency-based Explanation Over Text
  Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu | Topics: MILM, XAI, FAtt | 27 Jan 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
  Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova | 27 Jan 2022

Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
  Upol Ehsan, Mark O. Riedl | Topics: XAI, SILM | 26 Sep 2021

Explainable Activity Recognition for Smart Home Systems
  Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova | 20 May 2021

Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects
  Samir Passi, S. Jackson | 09 Feb 2020

Data Vision: Learning to See Through Algorithmic Abstraction
  Samir Passi, S. Jackson | 09 Feb 2020

How to Support Users in Understanding Intelligent Systems? Structuring the Discussion
  Malin Eiband, Daniel Buschek, H. Hussmann | 22 Jan 2020

Towards A Rigorous Science of Interpretable Machine Learning
  Finale Doshi-Velez, Been Kim | Topics: XAI, FaML | 28 Feb 2017

Effective Approaches to Attention-based Neural Machine Translation
  Thang Luong, Hieu H. Pham, Christopher D. Manning | 17 Aug 2015