Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI)
arXiv:2108.01737 (20 July 2021)
Authors: J. H. Hsiao, H. Ngai, Luyu Qiu, Yi Yang, Caleb Chen Cao
Tags: XAI
Papers citing "Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI)" (18 of 18 papers shown):
1. Quantitative Evaluations on Saliency Methods: An Experimental Study. Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen. Tags: FAtt, XAI. 31 Dec 2020.
2. On Controllability of AI. Roman V. Yampolskiy. 19 Jul 2020.
3. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? Peter Hase, Joey Tianyi Zhou. Tags: FAtt. 04 May 2020.
4. The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez. 22 Feb 2020.
5. Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze. Tags: AAML, FAtt, XAI. 03 Feb 2020.
6. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Q. V. Liao, D. Gruen, Sarah Miller. 08 Jan 2020.
7. Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. Andreas Holzinger, André M. Carrington, Heimo Muller. Tags: LRM, XAI, ELM. 19 Dec 2019.
8. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI. Shane T. Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein. Tags: XAI. 05 Feb 2019.
9. Explainable artificial intelligence (XAI), the goodness criteria and the grasp-ability test. Tae Wan Kim. Tags: XAI. 22 Oct 2018.
10. Layerwise Perturbation-Based Adversarial Training for Hard Drive Health Degree Prediction. Jianguo Zhang, Ji Wang, Lifang He, Zhao Li, Philip S. Yu. 11 Sep 2018.
11. xGEMs: Generating Examplars to Explain Black-Box Models. Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh. Tags: MLAU. 22 Jun 2018.
12. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres. Tags: FAtt. 30 Nov 2017.
13. Explanation in Artificial Intelligence: Insights from the Social Sciences. Tim Miller. Tags: XAI. 22 Jun 2017.
14. Interpretable Explanations of Black Boxes by Meaningful Perturbation. Ruth C. Fong, Andrea Vedaldi. Tags: FAtt, AAML. 11 Apr 2017.
15. The Mythos of Model Interpretability. Zachary Chase Lipton. Tags: FaML. 10 Jun 2016.
16. Generating Visual Explanations. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell. Tags: VLM, FAtt. 28 Mar 2016.
17. Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to the iCub's answers. I. Gaudiello, E. Zibetti, S. Lefort, Mohamed Chetouani, S. Ivaldi. 13 Oct 2015.
18. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell. Tags: VLM, ObjD. 06 Oct 2013.