Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods

1 November 2021
Zohaib Salahuddin
Henry C. Woodruff
A. Chatterjee
Philippe Lambin

Papers citing "Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods"

Showing 17 of 67 citing papers.
cRedAnno+: Annotation Exploitation in Self-Explanatory Lung Nodule Diagnosis
Jiahao Lu
Chong Yin
Kenny Erleben
M. B. Nielsen
S. Darkner
24
1
0
28 Oct 2022
Tiny-HR: Towards an interpretable machine learning pipeline for heart rate estimation on edge devices
Preetam Anbukarasu
Shailesh Nanisetty
Ganesh Tata
Nilanjan Ray
29
5
0
16 Aug 2022
Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images
Yusuf Brima
M. Atemkeng
FAtt
MedIm
29
0
0
01 Aug 2022
Improving Disease Classification Performance and Explainability of Deep Learning Models in Radiology with Heatmap Generators
A. Watanabe
Sara Ketabi
Khashayar Namdar
Farzad Khalvati
14
8
0
28 Jun 2022
Reducing Annotation Need in Self-Explanatory Models for Lung Nodule Diagnosis
Jiahao Lu
Chong Yin
Oswin Krause
Kenny Erleben
M. B. Nielsen
S. Darkner
MedIm
26
3
0
27 Jun 2022
Explainable Deep Learning Methods in Medical Image Classification: A Survey
Cristiano Patrício
João C. Neves
Luís F. Teixeira
XAI
24
52
0
10 May 2022
Attri-VAE: attribute-based interpretable representations of medical images with variational autoencoders
Irem Cetin
Maialen Stephens
Oscar Camara
M. A. G. Ballester
DRL
43
39
0
20 Mar 2022
Explainable Medical Imaging AI Needs Human-Centered Design: Guidelines and Evidence from a Systematic Review
Haomin Chen
Catalina Gomez
Chien-Ming Huang
Mathias Unberath
27
121
0
21 Dec 2021
A Survey on Neural-symbolic Learning Systems
Dongran Yu
Bo Yang
Da Liu
Hui Wang
Shirui Pan
21
55
0
10 Nov 2021
An Explainable-AI approach for Diagnosis of COVID-19 using MALDI-ToF Mass Spectrometry
V. Seethi
Z. LaCasse
P. Chivte
Joshua Bland
Shrihari S. Kadkol
E. Gaillard
Pratool Bharti
Hamed Alhoori
24
9
0
28 Sep 2021
Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning
A. Barnett
F. Schwartz
Chaofan Tao
Chaofan Chen
Yinhao Ren
J. Lo
Cynthia Rudin
64
21
0
12 Jul 2021
Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
Guang Yang
Qinghao Ye
Jun Xia
92
480
0
03 Feb 2021
Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images
K. Schutte
O. Moindrot
P. Hérent
Jean-Baptiste Schiratti
S. Jégou
FAtt
MedIm
39
60
0
19 Jan 2021
Explaining the Black-box Smoothly- A Counterfactual Approach
Junyu Chen
Yong Du
Yufan He
W. Paul Segars
Ye Li
MedIm
FAtt
65
100
0
11 Jan 2021
A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras
S. Laine
Timo Aila
300
10,368
0
12 Dec 2018
Learn To Pay Attention
Saumya Jetley
Nicholas A. Lord
Namhoon Lee
Philip Torr
67
437
0
06 Apr 2018
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
257
3,690
0
28 Feb 2017