What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions

9 May 2025
Somayeh Molaei, Lionel P. Robert, Nikola Banovic

Papers citing "What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions" (19 papers)

Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant
Gaole He, Nilay Aishwarya, U. Gadiraju (29 Jan 2025)

The Impact of Imperfect XAI on Human-AI Decision-Making
Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle C. Feng, Niklas Kühl, Adam Perer (25 Jul 2023)

Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making
K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn (16 Aug 2022)

"If it didn't happen, why would I change my decision?": How Judges
  Respond to Counterfactual Explanations for the Public Safety Assessment
"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
Yaniv Yacoby
Ben Green
Christopher L. Griffin
Finale Doshi Velez
52
17
0
11 May 2022
Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory
Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe (10 May 2022)

Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning
Krzysztof Z. Gajos, Lena Mamykina (11 Feb 2022)

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl (28 Jul 2021)

An Aligned Rank Transform Procedure for Multifactor Contrast Tests
Lisa Elkin, Matthew Kay, J. J. Higgins, J. Wobbrock (23 Feb 2021)

How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao 'Kenneth' Huang (26 Aug 2020) [FAtt, HAI]

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze (03 Feb 2020) [AAML, FAtt, XAI]

"How do I fool you?": Manipulating User Trust via Misleading Black Box
  Explanations
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Himabindu Lakkaraju
Osbert Bastani
56
255
0
15 Nov 2019
Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller (26 Sep 2019) [XAI]

The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson (09 Jul 2019) [VLM]

Model-Agnostic Counterfactual Explanations for Consequential Decisions
Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera (27 May 2019)

The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal (09 Mar 2018)

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti (06 Feb 2018) [XAI]

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller (22 Jun 2017) [XAI]

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee (22 May 2017) [FAtt]

Generating Visual Explanations
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell (28 Mar 2016) [VLM, FAtt]