Identifying pathologies automatically from medical images aids the understanding of the emergence and progression of diseases, and this ability is crucial for clinical diagnostics. However, existing deep learning models rely heavily on expert annotations and lack generalization capabilities in open clinical environments. In this study, we present a generalizable vision-language model for Annotation-Free pathology Localization (AFLoc). The core strength of AFLoc lies in its extensive multi-level semantic-structure-based contrastive learning, which comprehensively aligns multi-granularity medical concepts from reports with rich image features, enabling adaptation to diverse expressions of pathologies and to unseen pathologies without relying on expert image annotations. We conducted primary experiments on a chest X-ray dataset of 220K image-report pairs and performed extensive validation across six external datasets encompassing 20 types of chest pathologies. The results demonstrate that AFLoc outperforms state-of-the-art methods in both annotation-free localization and classification tasks. Additionally, we assessed the generalizability of AFLoc on other modalities, including histopathology and retinal fundus images. Extensive experiments show that AFLoc exhibits robust generalization capabilities, even surpassing human benchmarks in localizing five different types of pathology. These results highlight the potential of AFLoc to reduce annotation requirements and its applicability in complex clinical environments.
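For illustration only, the following is a minimal sketch (not the authors' released code) of how a multi-level report-image contrastive alignment of this kind could be implemented in PyTorch. All function names, tensor shapes, and the attention-pooling scheme are assumptions made for this example rather than details taken from the paper.

import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    # Symmetric InfoNCE loss between two batches of L2-normalized embeddings.
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multi_level_alignment_loss(img_global, img_patches,
                               txt_report, txt_sentences, txt_words):
    # img_global:    (B, D)    global image embedding
    # img_patches:   (B, P, D) patch-level image features
    # txt_report:    (B, D)    whole-report embedding
    # txt_sentences: (B, S, D) sentence-level embeddings
    # txt_words:     (B, W, D) word/concept-level embeddings

    # Report <-> image: standard global contrastive term.
    loss = info_nce(txt_report, img_global)

    # Sentence <-> patches: attention-pool patch features per sentence, then contrast.
    attn_s = torch.softmax(txt_sentences @ img_patches.transpose(1, 2), dim=-1)  # (B, S, P)
    sent_visual = attn_s @ img_patches                                            # (B, S, D)
    loss = loss + info_nce(txt_sentences.mean(1), sent_visual.mean(1))

    # Word <-> patches: the same idea at the finest granularity.
    attn_w = torch.softmax(txt_words @ img_patches.transpose(1, 2), dim=-1)       # (B, W, P)
    word_visual = attn_w @ img_patches                                             # (B, W, D)
    loss = loss + info_nce(txt_words.mean(1), word_visual.mean(1))
    return loss

At inference time, a model trained with such an objective can score the similarity between a pathology description and each image patch, yielding a localization heatmap without any pixel- or box-level annotations; how AFLoc realizes this precisely is detailed in the paper itself.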
@article{yang2025_2401.02044,
  title   = {Multi-modal vision-language model for generalizable annotation-free pathology localization and clinical diagnosis},
  author  = {Hao Yang and Hong-Yu Zhou and Jiarun Liu and Weijian Huang and Zhihuan Li and Yuanxu Gao and Cheng Li and Qiegen Liu and Yong Liang and Qi Yang and Song Wu and Tao Tan and Hairong Zheng and Kang Zhang and Shanshan Wang},
  journal = {arXiv preprint arXiv:2401.02044},
  year    = {2025}
}