ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.


Improving Medical Visual Representation Learning with Pathological-level Cross-Modal Alignment and Correlation Exploration

12 June 2025
Jun Wang
Lixing Zhu
Xiaohan Yu
Abhir Bhalerao
Yulan He
arXiv (abs) · PDF · HTML
Main text: 10 pages, 6 figures; bibliography: 2 pages
Abstract

Learning medical visual representations from image-report pairs through joint learning has garnered increasing research attention due to its potential to alleviate the data scarcity problem in the medical domain. The primary challenges stem from lengthy reports that feature complex discourse relations and semantic pathologies. Previous works have predominantly focused on instance-wise or token-wise cross-modal alignment, often neglecting the importance of pathological-level consistency. This paper presents PLACE, a novel framework that promotes Pathological-Level Alignment and enriches fine-grained details via Correlation Exploration without additional human annotations. Specifically, we propose a novel pathological-level cross-modal alignment (PCMA) approach to maximize the consistency of pathology observations from both images and reports. To facilitate this, a Visual Pathology Observation Extractor is introduced to extract visual pathology observation representations from localized tokens. The PCMA module operates independently of any external disease annotations, enhancing the generalizability and robustness of our method. Furthermore, we design a proxy task that requires the model to identify correlations among image patches, thereby enriching the fine-grained details crucial for various downstream tasks. Experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on multiple downstream tasks, including classification, image-to-text retrieval, semantic segmentation, object detection, and report generation.
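To make the alignment idea concrete, the following is a minimal, hypothetical sketch of how a pathological-level cross-modal alignment could be implemented: learnable observation queries cross-attend over localized image tokens (standing in for the Visual Pathology Observation Extractor), and the resulting visual observations are contrastively aligned with matching text-side observation embeddings. All class names, dimensions, and the loss form are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of pathological-level cross-modal alignment.
# Module names, dimensions, and the loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualPathologyObservationExtractor(nn.Module):
    """Pools localized image patch tokens into K pathology-observation
    vectors via learnable queries and cross-attention."""
    def __init__(self, dim=256, num_obs=8, num_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_obs, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patch_tokens):  # patch_tokens: (B, N, dim)
        b = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)      # (B, K, dim)
        obs, _ = self.attn(q, patch_tokens, patch_tokens)    # attend over patches
        return obs                                           # (B, K, dim)

def pcma_loss(visual_obs, text_obs, tau=0.07):
    """Symmetric InfoNCE-style loss aligning matched visual and textual
    pathology observations (a common choice for cross-modal alignment)."""
    v = F.normalize(visual_obs.flatten(0, 1), dim=-1)  # (B*K, dim)
    t = F.normalize(text_obs.flatten(0, 1), dim=-1)    # (B*K, dim)
    logits = v @ t.T / tau
    target = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, target) +
            F.cross_entropy(logits.T, target)) / 2
```

In this sketch the queries play the role of latent pathology slots; because they are learned rather than tied to disease labels, the alignment needs no external annotations, mirroring the annotation-free property claimed for the PCMA module.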

View on arXiv
@article{wang2025_2506.10573,
  title={Improving Medical Visual Representation Learning with Pathological-level Cross-Modal Alignment and Correlation Exploration},
  author={Jun Wang and Lixing Zhu and Xiaohan Yu and Abhir Bhalerao and Yulan He},
  journal={arXiv preprint arXiv:2506.10573},
  year={2025}
}