DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction (arXiv:2409.10504)
John Wu, David Wu, Jimeng Sun
16 September 2024

Papers citing "DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction"

20 papers shown.

  1. Surpassing GPT-4 Medical Coding with a Two-Stage Approach — Zhichao Yang, S. S. Batra, Joel Stremmel, Eran Halperin — 22 Nov 2023
  2. Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations — Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin — 17 Oct 2023
  3. Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study — Joakim Edin, Alexander Junge, Jakob Drachmann Havtorn, Lasse Borgholt, Maria Maistro, Tuukka Ruotsalo, Lars Maaløe — 21 Apr 2023
  4. Toy Models of Superposition — Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah — 21 Sep 2022
  5. There is no Accuracy-Interpretability Tradeoff in Reinforcement Learning for Mazes — Yishay Mansour, Michal Moshkovitz, Cynthia Rudin — 09 Jun 2022
  6. A Comparative Study of Faithfulness Metrics for Model Interpretability Methods — Chun Sik Chan, Huanqi Kong, Guanqing Liang — 12 Apr 2022
  7. Leveraging Sparse Linear Layers for Debuggable Deep Networks — Eric Wong, Shibani Santurkar, Aleksander Madry — 11 May 2021
  8. Explaining a Series of Models by Propagating Shapley Values — Hugh Chen, Scott M. Lundberg, Su-In Lee — 30 Apr 2021
  9. Interpretation of multi-label classification models using shapley values — Shikun Chen — 21 Apr 2021
  10. A Survey on Neural Network Interpretability — Yu Zhang, Peter Tiño, A. Leonardis, K. Tang — 28 Dec 2020
  11. ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network — Fei Li, Hong-ye Yu — 25 Nov 2019
  12. ERASER: A Benchmark to Evaluate Rationalized NLP Models — Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace — 08 Nov 2019
  13. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques — Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang — 06 Sep 2019
  14. Is Attention Interpretable? — Sofia Serrano, Noah A. Smith — 09 Jun 2019
  15. An Attentive Survey of Attention Models — S. Chaudhari, Varun Mithal, Gungor Polatkan, R. Ramanath — 05 Apr 2019
  16. Explainable Prediction of Medical Codes from Clinical Text — J. Mullenbach, Sarah Wiegreffe, J. Duke, Jimeng Sun, Jacob Eisenstein — 15 Feb 2018
  17. SPINE: SParse Interpretable Neural Embeddings — Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard H. Hovy — 23 Nov 2017
  18. A Unified Approach to Interpreting Model Predictions — Scott M. Lundberg, Su-In Lee — 22 May 2017
  19. Learning Important Features Through Propagating Activation Differences — Avanti Shrikumar, Peyton Greenside, A. Kundaje — 10 Apr 2017
  20. A survey of sparse representation: algorithms and applications — Zheng Zhang, Yong-mei Xu, Jian Yang, Xuelong Li, David C. Zhang — 23 Feb 2016