ProtoEEGNet: An Interpretable Approach for Detecting Interictal Epileptiform Discharges

3 December 2023
Dennis Tang, Frank Willard, Ronan Tegerdine, Luke Triplett, Jon Donnelly, Luke Moffett, Lesia Semenova, A. Barnett, Jin Jing, Cynthia Rudin, Brandon Westover
arXiv: 2312.10056

Papers citing "ProtoEEGNet: An Interpretable Approach for Detecting Interictal Epileptiform Discharges"

5 of 5 citing papers shown:

Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes
Jonathan Donnelly, A. Barnett, Chaofan Chen
Topics: 3DH | Metrics: 123 / 129 / 0 | 29 Nov 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Topics: FaML, AI4CE, LRM | Metrics: 235 / 674 / 0 | 20 Mar 2021

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Topics: FAtt, AAML, XAI | Metrics: 152 / 1,972 / 0 | 08 Oct 2018

Confounding variables can degrade generalization performance of radiological deep learning models
J. Zech, Marcus A. Badgeley, Manway Liu, A. Costa, J. Titano, Eric K. Oermann
Topics: OOD | Metrics: 87 / 1,180 / 0 | 02 Jul 2018

This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
Metrics: 268 / 1,187 / 0 | 27 Jun 2018