2001.08298
Cited By
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman (22 January 2020) [ELM]
Papers citing "Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems" (50 / 58 papers shown)
What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Somayeh Molaei, Lionel P. Robert, Nikola Banovic (09 May 2025)

Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions
Federico Maria Cau, Lucio Davide Spano (02 May 2025)

Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies
Sunnie S. Y. Kim, J. Vaughan, Q. V. Liao, Tania Lombrozo, Olga Russakovsky (12 Feb 2025)

The Value of Information in Human-AI Decision-making
Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman (10 Feb 2025) [FAtt]

Fine-Grained Appropriate Reliance: Human-AI Collaboration with a Multi-Step Transparent Decision Workflow for Complex Task Decomposition
Gaole He, Patrick Hemmer, Michael Vössing, Max Schemmer, U. Gadiraju (19 Jan 2025)

Personalized Help for Optimizing Low-Skilled Users' Strategy
Feng Gu, Wichayaporn Wongkamjan, Jordan Lee Boyd-Graber, Jonathan K. Kummerfeld, Denis Peskoff, Jonathan May (14 Nov 2024)

Unexploited Information Value in Human-AI Collaboration
Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman (03 Nov 2024)

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos (05 Oct 2024)

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart (27 Jul 2024)

Whether to trust: the ML leap of faith
Tory Frame, Sahraoui Dhelim, George Stothart, E. Coulthard (17 Jul 2024)

Participation in the age of foundation models
Harini Suresh, Emily Tseng, Meg Young, Mary L. Gray, Emma Pierson, Karen Levy (29 May 2024)

Data Science Principles for Interpretable and Explainable AI
Kris Sankaran (17 May 2024) [FaML]

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma (25 Mar 2024) [ELM]

"Are You Really Sure?" Understanding the Effects of Human Self-Confidence Calibration in AI-Assisted Decision Making
Shuai Ma, Xinru Wang, Ying Lei, Chuhan Shi, Ming Yin, Xiaojuan Ma (14 Mar 2024)

Overconfident and Unconfident AI Hinder Human-AI Collaboration
Jingshu Li, Yitian Yang, Renwen Zhang, Yi-Chieh Lee (12 Feb 2024)

Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, K. Verbert (31 Jul 2023)

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
Raymond Fok, Daniel S. Weld (12 May 2023)

Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
L. Herm (18 Apr 2023)

Learning Personalized Decision Support Policies
Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar (13 Apr 2023) [OffRL]

Distrust in (X)AI -- Measurement Artifact or Distinct Construct?
Nicolas Scharowski, S. Perrig (29 Mar 2023) [HILM]

Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
Patrick Hemmer, Monika Westphal, Max Schemmer, S. Vetter, Michael Vössing, G. Satzger (16 Mar 2023)

Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook
Teresa Datta, John P. Dickerson (10 Mar 2023)

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl (01 Feb 2023)

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal (18 Jan 2023)

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane (16 Dec 2022)

Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial (07 Sep 2022) [XAI]

Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making
K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn (16 Aug 2022)

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar (05 Jun 2022) [FAtt, ELM]

A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vössing (10 May 2022)

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi (06 May 2022)

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan (25 Apr 2022)

Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
Yuri Nakao, Simone Stumpf, Subeida Ahmed, A. Naseer, Lorenzo Strappelli (22 Apr 2022)

Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Greta Warren, Mark T. Keane, R. Byrne (21 Apr 2022) [CML]

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova (27 Jan 2022)

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan (21 Dec 2021)

Teaching Humans When To Defer to a Classifier via Exemplars
Hussein Mozannar, Arvind Satyanarayan, David Sontag (22 Nov 2021)

Visual Intelligence through Human Interaction
Ranjay Krishna, Mitchell L. Gordon, Fei-Fei Li, Michael S. Bernstein (12 Nov 2021)

An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
Francesco Sovrano, F. Vitali (11 Sep 2021)

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl (28 Jul 2021)

A Framework for Evaluating Post Hoc Feature-Additive Explainers
Zachariah Carmichael, Walter J. Scheirer (15 Jun 2021) [FAtt]

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden (21 May 2021) [XAI]

White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
Meghna P. Ayyar, J. Benois-Pineau, A. Zemmari (06 Apr 2021) [FAtt]

Designing for human-AI complementarity in K-12 education
Kenneth Holstein, V. Aleven (02 Apr 2021) [HAI]

Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative
A. Levy, Monica Agrawal, Arvind Satyanarayan, David Sontag (08 Mar 2021)

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth (26 Feb 2021) [CML]

To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making
Zana Buçinca, M. Malaya, Krzysztof Z. Gajos (19 Feb 2021)

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan (17 Feb 2021) [FAtt]

Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens
Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C. Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos (01 Feb 2021)

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan (24 Jan 2021)

Expanding Explainability: Towards Social Transparency in AI systems
Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz (12 Jan 2021)