Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
arXiv:2007.12248 · 23 July 2020
Eric Chu, D. Roy, Jacob Andreas
FAtt, LRM
Papers citing "Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction" (14 of 14 papers shown)
Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
01 Mar 2024
Improving Human-AI Collaboration With Descriptions of AI Behavior
Ángel Alexander Cabrera, Adam Perer, Jason I. Hong
06 Jan 2023
On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
FAtt, CML
13 Dec 2022
Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Giang Nguyen, Mohammad Reza Taesiri, Anh Totti Nguyen
26 Jul 2022
A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing
10 May 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf
FAtt
25 Apr 2022
Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
FAtt
17 Dec 2021
Teaching Humans When To Defer to a Classifier via Exemplars
Hussein Mozannar, Arvindmani Satyanarayan, David Sontag
22 Nov 2021
Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence
Max Schemmer, Niklas Kühl, G. Satzger
28 Sep 2021
Exploring The Role of Local and Global Explanations in Recommender Systems
Marissa Radensky, Doug Downey, Kyle Lo, Z. Popović, Daniel S. Weld (University of Washington)
LRM
27 Sep 2021
Explaining the Road Not Taken
Hua Shen, Ting-Hao 'Kenneth' Huang
FAtt, XAI
27 Mar 2021
Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML, FAtt
25 Feb 2021
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt
23 Oct 2020
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017