ResearchTrend.AI

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
arXiv:1811.11839 · 28 November 2018
Sina Mohseni, Niloofar Zarei, Eric D. Ragan

Papers citing "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems"

19 papers listed.

  1. A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
     M. Rubaiyat Hossain Mondal, Prajoy Podder. 10 Apr 2023.
  2. Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts
     Ashley Suh, G. Appleby, Erik W. Anderson, Luca A. Finelli, Remco Chang, Dylan Cashman. 11 May 2022.
  3. Explainable Misinformation Detection Across Multiple Social Media Platforms
     Gargi Joshi, Ananya Srivastava, Bhargav D. Yagnik, Md Musleh Uddin Hasan, Zainuddin Saiyed, Lubna A Gabralla, Ajith Abraham, Rahee Walambe, K. Kotecha. 20 Mar 2022.
  4. Exploring The Role of Local and Global Explanations in Recommender Systems
     Marissa Radensky, Doug Downey, Kyle Lo, Z. Popović, Daniel S. Weld (University of Washington). 27 Sep 2021. Topics: LRM.
  5. Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
     Jean-Marie John-Mathews. 20 Sep 2021.
  6. Temporal Dependencies in Feature Importance for Time Series Predictions
     Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs. 29 Jul 2021. Topics: OOD, AI4TS.
  7. Evaluating the Correctness of Explainable AI Algorithms for Classification
     Orcun Yalcin, Xiuyi Fan, Siyuan Liu. 20 May 2021. Topics: XAI, FAtt.
  8. Dissonance Between Human and Machine Understanding
     Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand. 18 Jan 2021.
  9. Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
     Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan. 28 Aug 2020. Topics: HAI.
  10. The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
      Mahsan Nourani, J. King, Eric D. Ragan. 20 Aug 2020.
  11. An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
      Arash Rahnama, A.-Yu Tseng. 20 May 2020. Topics: FAtt, AAML, FaML.
  12. Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
      Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate. 05 May 2020. Topics: AAML.
  13. Explainable Deep Learning: A Field Guide for the Uninitiated
      Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. 30 Apr 2020. Topics: AAML, XAI.
  14. Questioning the AI: Informing Design Practices for Explainable AI User Experiences
      Q. V. Liao, D. Gruen, Sarah Miller. 08 Jan 2020.
  15. An explanation method for Siamese neural networks
      Lev V. Utkin, M. Kovalev, E. Kasimov. 18 Nov 2019.
  16. Consistent Feature Construction with Constrained Genetic Programming for Experimental Physics
      Noëlie Cherrier, Jean-Philippe Poli, Maxime Defurne, F. Sabatié. 17 Aug 2019. Topics: AI4CE.
  17. Leveraging Latent Features for Local Explanations
      Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu. 29 May 2019. Topics: FAtt.
  18. Methods for Interpreting and Understanding Deep Neural Networks
      G. Montavon, Wojciech Samek, K. Müller. 24 Jun 2017. Topics: FaML.
  19. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
      Alexandra Chouldechova. 24 Oct 2016. Topics: FaML.