PathVG: A New Benchmark and Dataset for Pathology Visual Grounding

28 February 2025
Chunlin Zhong, Shuang Hao, Junhua Wu, Xiaona Chang, Jiwei Jiang, Xiu Nie, He Tang, Xiang Bai
Abstract

With the rapid development of computational pathology, many AI-assisted diagnostic tasks have emerged. Cellular nuclei segmentation can delineate various types of cells for downstream analysis, but it relies on predefined categories and lacks flexibility. Pathology visual question answering, on the other hand, provides image-level understanding but lacks region-level detection capability. To address this, we propose a new benchmark, Pathology Visual Grounding (PathVG), which aims to detect regions based on expressions with different attributes. To evaluate PathVG, we create a new dataset named RefPath, which contains 27,610 images with 33,500 language-grounded boxes. Compared with visual grounding in other domains, PathVG presents multi-scale pathological images and expressions that carry pathological knowledge. In our experimental study, we found that the biggest challenge is the implicit information underlying pathological expressions. Based on this, we propose the Pathology Knowledge-enhanced Network (PKNet) as the baseline model for PathVG. PKNet leverages the knowledge-enhancement capabilities of Large Language Models (LLMs) to convert pathological terms carrying implicit information into explicit visual features, and fuses the resulting knowledge features with expression features through a dedicated Knowledge Fusion Module (KFM). The proposed method achieves state-of-the-art performance on the PathVG benchmark.
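The abstract only describes PKNet at a high level, so the following is a minimal PyTorch sketch of how LLM-derived knowledge features might be fused with expression features. It assumes a simple cross-attention design; the class name KnowledgeFusionSketch, the feature dimensions, and the fusion mechanism are illustrative assumptions, not the paper's actual KFM.

# Illustrative sketch only: the paper does not specify the KFM internals here,
# so this assumes a cross-attention fusion between expression and knowledge features.
import torch
import torch.nn as nn

class KnowledgeFusionSketch(nn.Module):
    """Hypothetical fusion of expression features with LLM-derived knowledge features."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Expression tokens attend to knowledge tokens (cross-attention).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, expr_feats: torch.Tensor, know_feats: torch.Tensor) -> torch.Tensor:
        # expr_feats: (B, L_expr, dim) features of the referring expression
        # know_feats: (B, L_know, dim) knowledge features produced by an LLM encoder
        fused, _ = self.cross_attn(expr_feats, know_feats, know_feats)
        x = self.norm1(expr_feats + fused)      # residual + norm
        return self.norm2(x + self.ffn(x))      # feed-forward refinement

if __name__ == "__main__":
    expr = torch.randn(2, 20, 256)   # dummy expression features
    know = torch.randn(2, 40, 256)   # dummy knowledge features
    print(KnowledgeFusionSketch()(expr, know).shape)  # torch.Size([2, 20, 256])

The fused output would then condition a visual grounding head that predicts the referred region; that part is omitted here because the abstract gives no detail on it.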

@article{zhong2025_2502.20869,
  title={PathVG: A New Benchmark and Dataset for Pathology Visual Grounding},
  author={Chunlin Zhong and Shuang Hao and Junhua Wu and Xiaona Chang and Jiwei Jiang and Xiu Nie and He Tang and Xiang Bai},
  journal={arXiv preprint arXiv:2502.20869},
  year={2025}
}