Knowledge-Augmented Multimodal Clinical Rationale Generation for Disease Diagnosis with Small Language Models

Main: 8 pages · 7 figures · 6 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

Interpretability is critical for disease diagnosis, yet existing models struggle to balance predictive accuracy with human-understandable rationales. While large language models (LLMs) offer strong reasoning abilities, their clinical use is limited by high computational cost and restricted multimodal reasoning. Small language models (SLMs) are efficient but lack the advanced reasoning needed to integrate multimodal medical data. Moreover, both LLMs and SLMs lack the domain knowledge required for trustworthy reasoning. We therefore propose ClinRaGen, which enhances SLMs through LLM-derived rationale distillation and domain knowledge injection for trustworthy multimodal rationale generation. Its key innovations are a sequential rationale distillation framework that equips SLMs with LLM-comparable multimodal reasoning abilities, and a knowledge-augmented attention mechanism that unifies representations of time-series and textual data in a shared encoding space, allowing them to be naturally interpreted by SLMs while incorporating domain knowledge for reliable rationale generation. Experiments on real-world medical datasets show that ClinRaGen achieves state-of-the-art performance in disease diagnosis and rationale generation, demonstrating the effectiveness of combining LLM-driven reasoning with knowledge augmentation for improved interpretability.
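To make the two named components concrete, below is a minimal, hypothetical PyTorch sketch of (a) a knowledge-augmented attention block that projects time-series and knowledge embeddings into the SLM's text-encoding space, and (b) a distillation-style training step that combines imitation of LLM rationale tokens with diagnosis supervision. The abstract does not specify the implementation, so all names (KnowledgeAugmentedAttention, rationale_distillation_step), shapes, and hyperparameters here are assumptions for illustration, not the paper's actual code.

```python
# Hypothetical sketch of the abstract's two components; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeAugmentedAttention(nn.Module):
    """Fuse time-series and domain-knowledge embeddings into the text space."""
    def __init__(self, ts_dim: int, know_dim: int, text_dim: int, n_heads: int = 8):
        super().__init__()
        # Project each modality into the SLM's encoding space (assumed design).
        self.ts_proj = nn.Linear(ts_dim, text_dim)
        self.know_proj = nn.Linear(know_dim, text_dim)
        # Time-series tokens attend over domain-knowledge tokens.
        self.attn = nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, ts_feats: torch.Tensor, know_feats: torch.Tensor) -> torch.Tensor:
        # ts_feats: (B, T, ts_dim); know_feats: (B, K, know_dim)
        q = self.ts_proj(ts_feats)
        kv = self.know_proj(know_feats)
        fused, _ = self.attn(q, kv, kv)
        # Residual + norm keeps fused tokens in the shared text space, so they
        # can be concatenated with ordinary text embeddings fed to the SLM.
        return self.norm(q + fused)

def rationale_distillation_step(slm_logits, teacher_logits, diag_logits,
                                diag_labels, alpha: float = 0.5, tau: float = 2.0):
    """One training step: imitate LLM rationale tokens + supervise diagnosis."""
    # KL divergence between temperature-softened teacher and student
    # token distributions (standard knowledge-distillation loss).
    distill = F.kl_div(
        F.log_softmax(slm_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2
    # Cross-entropy on the diagnosis prediction head.
    diag = F.cross_entropy(diag_logits, diag_labels)
    return alpha * distill + (1 - alpha) * diag
```

The "sequential" aspect described in the abstract would presumably apply such a step stage by stage (e.g., distilling textual rationales before multimodal ones); the weighted-sum loss above is one common way to combine distillation and task supervision, chosen here for brevity.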

@article{niu2025_2411.07611,
  title={Knowledge-Augmented Multimodal Clinical Rationale Generation for Disease Diagnosis with Small Language Models},
  author={Shuai Niu and Jing Ma and Hongzhan Lin and Liang Bai and Zhihua Wang and Yida Xu and Yunya Song and Xian Yang},
  journal={arXiv preprint arXiv:2411.07611},
  year={2025}
}