Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models

Abstract

Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks such as retrieval and zero-shot classification. However, conventional (CLIP-based) cross-modal contrastive learning methods suffer from suboptimal visual representation capabilities, which in turn limits their effectiveness in vision-language alignment. In contrast, models pretrained via multimodal masked modeling struggle with direct cross-modal matching but excel at visual representation. To resolve this tension, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that uses only about 8% of the trainable parameters and less than 1/5 of the computational cost required for masked record modeling. By adapting the vision model pretrained with masked record modeling, ALTA achieves superior performance in vision-language matching tasks such as retrieval and zero-shot classification. In addition, we integrate temporal-multiview radiograph inputs to enhance the information consistency between radiographs and their corresponding descriptions in reports, further improving vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4% absolute points in text-to-image retrieval accuracy and approximately 6% absolute points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at this https URL.
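To make the general recipe concrete, below is a minimal PyTorch sketch of parameter-efficient alignment of the kind the abstract describes: a frozen, masked-pretrained vision encoder is augmented with small trainable adapters and aligned to a text encoder via a symmetric contrastive (InfoNCE) loss. This is an illustrative assumption, not the authors' released implementation: the class name AlignThroughAdapting, the timm-style ViT interface (.patch_embed, .blocks), the bottleneck width, and the assumption that the text encoder returns pooled [B, dim] embeddings are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Residual bottleneck adapter placed after a frozen transformer block."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))  # residual bottleneck

class AlignThroughAdapting(nn.Module):  # hypothetical name, not the authors' code
    def __init__(self, vision_encoder, text_encoder, dim=768, proj_dim=512):
        super().__init__()
        self.vision = vision_encoder
        for p in self.vision.parameters():  # freeze the masked-pretrained weights
            p.requires_grad = False
        # one trainable adapter per frozen transformer block (the ~8% of parameters)
        self.adapters = nn.ModuleList(Adapter(dim) for _ in self.vision.blocks)
        self.text = text_encoder
        self.v_proj = nn.Linear(dim, proj_dim)
        self.t_proj = nn.Linear(dim, proj_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # log(1/0.07), as in CLIP

    def encode_image(self, images):
        # assumes a timm-style ViT exposing .patch_embed and .blocks
        x = self.vision.patch_embed(images)
        for block, adapter in zip(self.vision.blocks, self.adapters):
            x = adapter(block(x))          # frozen block, trainable adapter
        return self.v_proj(x.mean(dim=1))  # mean-pool patch tokens into joint space

    def forward(self, images, token_ids):
        img = F.normalize(self.encode_image(images), dim=-1)
        txt = F.normalize(self.t_proj(self.text(token_ids)), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()
        labels = torch.arange(logits.size(0), device=logits.device)
        # symmetric InfoNCE: matched image-report pairs are the positives
        return (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.t(), labels)) / 2

In such a setup only the adapters, projection heads, and temperature are updated, which is what keeps the trainable-parameter count and compute far below full fine-tuning; the temporal-multiview inputs mentioned in the abstract would enter as additional views batched through encode_image.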

@article{lian2025_2506.08990,
  title={Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models},
  author={Chenyu Lian and Hong-Yu Zhou and Dongyun Liang and Jing Qin and Liansheng Wang},
  journal={arXiv preprint arXiv:2506.08990},
  year={2025}
}