Does Explainable Artificial Intelligence Improve Human Decision-Making?
19 June 2020
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
arXiv: 2006.11194
Tags: XAI

Papers citing "Does Explainable Artificial Intelligence Improve Human Decision-Making?" (12 papers)
Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
Tags: XAI, FAtt
11 Jun 2024
In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
Raymond Fok, Daniel S. Weld
12 May 2023
Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
Patrick Hemmer, Monika Westphal, Max Schemmer, S. Vetter, Michael Vossing, G. Satzger
16 Mar 2023
Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty
Minkyu Shin, Jin Kim, B. V. Opheusden, Thomas L. Griffiths
13 Mar 2023
SpecXAI -- Spectral Interpretability of Deep Learning Models
Stefan Druc, Peter Wooldridge, A. Krishnamurthy, S. Sarkar, Aditya Balu
20 Feb 2023
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023
Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
04 Feb 2023
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
Tags: FaML
23 Sep 2022
A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing
10 May 2022
Pitfalls of Explainable ML: An Industry Perspective
Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
Tags: XAI
14 Jun 2021
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Tags: FAtt
23 Oct 2020
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017