ResearchTrend.AI
A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models

21 February 2025
Yuchen Jiang
Xinyuan Zhao
Yihang Wu
Ahmad Chaddad
    MedIm
Abstract

With the rapid development of artificial intelligence (AI), particularly in the medical field, the need for explainability has grown. In medical image analysis, high transparency and model interpretability help clinicians better understand and trust the decision-making process of AI models. In this study, we propose a Knowledge Distillation (KD)-based approach to enhance the transparency of AI models in medical image analysis. We first train a conventional CNN as the teacher model, then use KD to simplify the CNN architecture, retaining most of the features learned from the dataset while reducing the number of network layers. We then perform a hierarchical analysis of the student model's feature maps to identify key features and decision-making processes, which yields intuitive visual explanations. We evaluated our method on three public medical datasets (brain tumor, eye disease, and Alzheimer's disease). Even with fewer layers, our model achieves remarkable results on the test sets and reduces the time required for interpretability analysis.
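The teacher-to-student transfer described above typically relies on the standard distillation objective: a weighted sum of a softened KL-divergence term (matching teacher and student output distributions) and a cross-entropy term on the hard labels. A minimal NumPy sketch of that loss follows; the temperature `T`, weight `alpha`, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD objective: alpha-weighted KL divergence between the
    softened teacher and student outputs, plus (1 - alpha) times the
    cross-entropy between the student and the hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student); the T^2 factor keeps gradient magnitudes
    # comparable across temperatures.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard_p = softmax(student_logits)  # T = 1 for the hard-label term
    idx = np.arange(len(labels))
    ce = -np.log(hard_p[idx, np.asarray(labels)] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))

# Toy usage: a student that matches the teacher incurs a lower loss
# than one that contradicts it.
teacher = [[5.0, 0.0, 0.0]]
labels = [0]
loss_matched = distillation_loss([[5.0, 0.0, 0.0]], teacher, labels)
loss_wrong = distillation_loss([[0.0, 5.0, 0.0]], teacher, labels)
```

When student and teacher logits coincide, the KL term vanishes and only the hard-label cross-entropy remains, which is why the matched student scores lower.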

@article{jiang2025_2502.15959,
  title={A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models},
  author={Yuchen Jiang and Xinyuan Zhao and Yihang Wu and Ahmad Chaddad},
  journal={arXiv preprint arXiv:2502.15959},
  year={2025}
}