A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI

17 July 2019 · Erico Tjoa, Cuntai Guan · XAI · arXiv:1907.07374

Papers citing "A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI"

35 / 135 citing papers shown (each entry: title · authors · topics · date)
Consistent Explanations by Contrastive Learning · Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash · FAtt · 01 Oct 2021
Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption · A. Frikha, Denis Krompass, Volker Tresp · OOD · 09 Sep 2021
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation · Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer · 27 Aug 2021
A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection · Jens Bayer, David Munch, Michael Arens · 3DPC · 26 Aug 2021
Explaining Bayesian Neural Networks · Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft · BDL, AAML · 23 Aug 2021
Improvement of a Prediction Model for Heart Failure Survival through Explainable Artificial Intelligence · Pedro A. Moreno-Sánchez · 20 Aug 2021
Knowledge-intensive Language Understanding for Explainable AI · A. Sheth, Manas Gaur, Kaushik Roy, Keyur Faldu · 02 Aug 2021
Explainable AI, but explainable to whom? · Julie Gerlings, Millie Søndergaard Jensen, Arisa Shollo · 10 Jun 2021
Explaining Time Series Predictions with Dynamic Masks · Jonathan Crabbé, M. Schaar · FAtt, AI4TS · 09 Jun 2021
A Review on Explainability in Multimodal Deep Neural Nets · Gargi Joshi, Rahee Walambe, K. Kotecha · 17 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts · Gesina Schwalbe, Bettina Finzel · XAI · 15 May 2021
Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence · Emna Baccour, N. Mhaisen, A. Abdellatif, A. Erbad, Amr M. Mohamed, Mounir Hamdi, Mohsen Guizani · 04 May 2021
Improving Attribution Methods by Learning Submodular Functions · Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian · TDI · 19 Apr 2021
Deep ROC Analysis and AUC as Balanced Average Accuracy to Improve Model Selection, Understanding and Interpretation · André M. Carrington, D. Manuel, Paul Fieguth, T. Ramsay, V. Osmani, ..., S. Hawken, M. McInnes, Olivia Magwood, Yusuf Sheikh, Andreas Holzinger · 21 Mar 2021
Artificial Intelligence Narratives: An Objective Perspective on Current Developments · Noah Klarmann · AI4TS · 18 Mar 2021
Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset · Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes · OOD · 18 Mar 2021
Interpretable Deep Learning for the Remote Characterisation of Ambulation in Multiple Sclerosis using Smartphones · Andrew P. Creagh, F. Lipsmeier, M. Lindemann, M. D. Vos · 16 Mar 2021
Explanations in Autonomous Driving: A Survey · Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021
ACTA: A Mobile-Health Solution for Integrated Nudge-Neurofeedback Training for Senior Citizens · Giulia Cisotto, A. Trentini, I. Zoppis, Alessio Zanga, Sara Manzoni, G. Pietrabissa, Anna Guerrini Usubini, G. Castelnuovo · 17 Feb 2021
Convolutional Neural Network Interpretability with General Pattern Theory · Erico Tjoa, Cuntai Guan · FAtt, AI4CE · 05 Feb 2021
Learning Efficient, Explainable and Discriminative Representations for Pulmonary Nodules Classification · Hanliang Jiang, Fuhao Shen, Fei Gao, Weidong Han · 19 Jan 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges · Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021
CycleGAN for Interpretable Online EMT Compensation · Henry J Krumb, Dhritimaan Das, R. Chadda, Anirban Mukhopadhyay · MedIm · 05 Jan 2021
A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images · Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro · MedIm · 20 Oct 2020
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset · Erico Tjoa, Cuntai Guan · XAI, FAtt · 07 Sep 2020
Survey of XAI in digital pathology · Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström · 14 Aug 2020
Feature Ranking for Semi-supervised Learning · Matej Petković, S. Džeroski, D. Kocev · 10 Aug 2020
Explainable Deep Learning: A Field Guide for the Uninitiated · Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 30 Apr 2020
Convex Density Constraints for Computing Plausible Counterfactual Explanations · André Artelt, Barbara Hammer · 12 Feb 2020
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis · D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak · 16 Dec 2019
On the computation of counterfactual explanations -- A survey · André Artelt, Barbara Hammer · LRM · 15 Nov 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI · Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · XAI · 22 Oct 2019
Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms · Z. Q. Lin, M. Shafiee, S. Bochkarev, Michael St. Jules, Xiao Yu Wang, A. Wong · FAtt · 16 Oct 2019
A causal framework for explaining the predictions of black-box sequence-to-sequence models · David Alvarez-Melis, Tommi Jaakkola · CML · 06 Jul 2017
Methods for Interpreting and Understanding Deep Neural Networks · G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017