ResearchTrend.AI

Learning Concept-Driven Logical Rules for Interpretable and Generalizable Medical Image Classification

20 May 2025
Yibo Gao
Hangqi Zhou
Zheyao Gao
Bomin Wang
Shangqi Gao
Sihan Wang
Xiahai Zhuang
Abstract

The pursuit of decision safety in clinical applications highlights the potential of concept-based methods in medical imaging. While these models offer active interpretability, they often suffer from concept leakage, where unintended information within soft concept representations undermines both interpretability and generalizability. Moreover, most concept-based models focus solely on local explanations (instance-level), neglecting the global decision logic (dataset-level). To address these limitations, we propose Concept Rule Learner (CRL), a novel framework to learn Boolean logical rules from binarized visual concepts. CRL employs logical layers to capture concept correlations and extract clinically meaningful rules, thereby providing both local and global interpretability. Experiments on two medical image classification tasks show that CRL achieves competitive performance with existing methods while significantly improving generalizability to out-of-distribution data. The code of our work is available at this https URL.
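To make the core idea concrete, the sketch below illustrates the two steps the abstract names: binarizing soft concept scores, then evaluating a conjunctive (AND) rule over the resulting Booleans. This is a minimal illustration, not the authors' implementation; the function names, the 0.5 threshold, and the membership-vector rule encoding are all assumptions for the example.

```python
def binarize(soft_concepts, threshold=0.5):
    """Hard-threshold soft concept scores into Boolean (0/1) values.
    Thresholding removes the extra information that soft scores can
    leak, which is the 'concept leakage' problem the paper targets."""
    return [1 if s >= threshold else 0 for s in soft_concepts]

def and_rule(binary_concepts, membership):
    """Evaluate one conjunctive rule over binarized concepts.
    membership[j] = 1 means concept j participates in the rule;
    the rule fires (returns 1) only if every selected concept is on."""
    return int(all(c == 1
                   for c, m in zip(binary_concepts, membership)
                   if m == 1))

# Hypothetical example: three concept scores from a concept predictor.
b = binarize([0.9, 0.2, 0.7])        # -> [1, 0, 1]
print(and_rule(b, [1, 0, 1]))        # rule "c0 AND c2" fires -> 1
print(and_rule(b, [0, 1, 0]))        # rule "c1" does not fire -> 0
```

In the full framework such membership patterns are learned by differentiable logical layers rather than fixed by hand, and the learned rules can then be read off directly, giving the dataset-level explanations the abstract describes.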

View on arXiv
@article{gao2025_2505.14049,
  title={Learning Concept-Driven Logical Rules for Interpretable and Generalizable Medical Image Classification},
  author={Yibo Gao and Hangqi Zhou and Zheyao Gao and Bomin Wang and Shangqi Gao and Sihan Wang and Xiahai Zhuang},
  journal={arXiv preprint arXiv:2505.14049},
  year={2025}
}