A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni, Niloofar Zarei, Eric D. Ragan
arXiv:1811.11839, 28 November 2018

Papers citing "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems" (19 papers shown):

A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
10 Apr 2023

Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts
Ashley Suh, G. Appleby, Erik W. Anderson, Luca A. Finelli, Remco Chang, Dylan Cashman
11 May 2022

Explainable Misinformation Detection Across Multiple Social Media Platforms
Gargi Joshi, Ananya Srivastava, Bhargav D. Yagnik, Md Musleh Uddin Hasan, Zainuddin Saiyed, Lubna A. Gabralla, Ajith Abraham, Rahee Walambe, K. Kotecha
20 Mar 2022

Exploring The Role of Local and Global Explanations in Recommender Systems
Marissa Radensky, Doug Downey, Kyle Lo, Z. Popović, Daniel S. Weld (University of Washington)
Tags: LRM
27 Sep 2021

Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
Jean-Marie John-Mathews
20 Sep 2021

Temporal Dependencies in Feature Importance for Time Series Predictions
Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs
Tags: OOD, AI4TS
29 Jul 2021

Evaluating the Correctness of Explainable AI Algorithms for Classification
Orcun Yalcin, Xiuyi Fan, Siyuan Liu
Tags: XAI, FAtt
20 May 2021

Dissonance Between Human and Machine Understanding
Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand
18 Jan 2021

Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
Tags: HAI
28 Aug 2020

The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Mahsan Nourani, J. King, Eric D. Ragan
20 Aug 2020

An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Arash Rahnama, A.-Yu Tseng
Tags: FAtt, AAML, FaML
20 May 2020

Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
Tags: AAML
05 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
Tags: AAML, XAI
30 Apr 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020

An explanation method for Siamese neural networks
Lev V. Utkin, M. Kovalev, E. Kasimov
18 Nov 2019

Consistent Feature Construction with Constrained Genetic Programming for Experimental Physics
Noëlie Cherrier, Jean-Philippe Poli, Maxime Defurne, F. Sabatié
Tags: AI4CE
17 Aug 2019

Leveraging Latent Features for Local Explanations
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu
Tags: FAtt
29 May 2019

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
Tags: FaML
24 Jun 2017

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
Tags: FaML
24 Oct 2016