What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
S. Tonekaboni, Shalmali Joshi, M. McCradden, Anna Goldenberg
arXiv:1905.05134 · 13 May 2019

Papers citing "What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use"

39 papers shown

ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning
Siyang Song, David Chen, Thomas Statchen, Michael C. Burkhart, Nipun Bhandari, Bashar Ramadan, Brett Beaulieu-Jones
11 Apr 2025 · 1 citation

No Black Box Anymore: Demystifying Clinical Predictive Modeling with Temporal-Feature Cross Attention Mechanism
Yubo Li, Xinyu Yao, R. Padman
25 Mar 2025 · FAtt, AI4TS · 0 citations

Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
Leisheng Yu, Yanxiao Cai, Minxing Zhang, Helen Zhou
15 Feb 2025 · FAtt · 0 citations

Tackling COVID-19 through Responsible AI Innovation: Five Steps in the Right Direction
David Leslie
15 Aug 2020 · 67 citations

The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
Andrés Páez
22 Feb 2020 · 196 citations

Attention is not Explanation
Sarthak Jain, Byron C. Wallace
26 Feb 2019 · FAtt · 1,329 citations

On the consistency of supervised learning with missing values
Julie Josse, Jacob M. Chen, Nicolas Prost, Erwan Scornet, Gaël Varoquaux
19 Feb 2019 · 116 citations

Measuring Patient Similarities via a Deep Architecture with Medical Concept Embedding
Zihao Zhu, Changchang Yin, B. Qian, Yu Cheng, Jishang Wei, Fei Wang
09 Feb 2019 · 118 citations

Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, J. Hipp, Been Kim, ..., Martin Wattenberg, F. Viégas, G. Corrado, Martin C. Stumpe, Michael Terry
08 Feb 2019 · 403 citations

Can You Trust This Prediction? Auditing Pointwise Reliability After Learning
Peter F. Schulam, Suchi Saria
02 Jan 2019 · OOD · 104 citations

ClinicalVis: Supporting Clinical Task-Focused Design Evaluation
Marzyeh Ghassemi, Mahima Pushkarna, James Wexler, Jesse Johnson, P. Varghese
13 Oct 2018 · 19 citations

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
08 Oct 2018 · FAtt, AAML, XAI · 1,970 citations

Model Cards for Model Reporting
Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru
05 Oct 2018 · 1,908 citations

Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Besmira Nushi, Ece Kamar, Eric Horvitz
19 Sep 2018 · 141 citations

RAIM: Recurrent Attentive and Intensive Model of Multimodal Patient Monitoring Data
Yanbo Xu, Siddharth Biswal, S. Deshpande, K. Maher, Jimeng Sun
23 Jul 2018 · 168 citations

RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
Bum Chul Kwon, Min-Je Choi, J. Kim, Edward Choi, Young Bin Kim, Soonwook Kwon, Jimeng Sun, Jaegul Choo
28 May 2018 · 252 citations

Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
21 Feb 2018 · 701 citations

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
06 Feb 2018 · XAI · 3,979 citations

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
02 Feb 2018 · FAtt, XAI · 244 citations

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
16 Nov 2017 · AI4CE · 283 citations

Understanding Hidden Memories of Recurrent Neural Networks
Yao Ming, Shaozu Cao, Ruixiang Zhang, Zerui Li, Yuanzhe Chen, Yangqiu Song, Huamin Qu
30 Oct 2017 · HAI · 201 citations

Deep and Confident Prediction for Time Series at Uber
Lingxue Zhu, N. Laptev
06 Sep 2017 · BDL, AI4TS · 345 citations

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
22 Jun 2017 · XAI · 4,281 citations

On Calibration of Modern Neural Networks
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
14 Jun 2017 · UQCV · 5,871 citations

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017 · 3DV · 132,454 citations

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
14 Mar 2017 · TDI · 2,910 citations

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
04 Mar 2017 · OOD, FAtt · 6,024 citations

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017 · XAI, FaML · 3,820 citations

RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism
Edward Choi, M. T. Bahadori, Joshua A. Kulas, A. Schuetz, Walter F. Stewart, Jimeng Sun
19 Aug 2016 · AI4TS · 1,249 citations

The Mythos of Model Interpretability
Zachary Chase Lipton
10 Jun 2016 · FaML · 3,708 citations

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow
24 May 2016 · SILM, AAML · 1,742 citations

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Distilling Knowledge from Deep Networks with Applications to Healthcare Domain
Zhengping Che, S. Purushotham, R. Khemani, Yan Liu
11 Dec 2015 · 139 citations

Understanding Neural Networks Through Deep Visualization
J. Yosinski, Jeff Clune, Anh Totti Nguyen, Thomas J. Fuchs, Hod Lipson
22 Jun 2015 · FAtt, AI4CE · 1,875 citations

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
06 Jun 2015 · UQCV, BDL · 9,353 citations

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
09 Mar 2015 · FedML · 19,745 citations

Supersparse Linear Integer Models for Optimized Medical Scoring Systems
Berk Ustun, Cynthia Rudin
15 Feb 2015 · 354 citations

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Ke Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, R. Zemel, Yoshua Bengio
10 Feb 2015 · DiffM · 10,083 citations

Falling Rule Lists
Fulton Wang, Cynthia Rudin
21 Nov 2014 · 258 citations