PRS-Med: Position Reasoning Segmentation with Vision-Language Model in Medical Imaging

17 May 2025
Quoc-Huy Trinh
Minh-Van Nguyen
Jung Peng
Ulas Bagci
Debesh Jha
Abstract

Recent advancements in prompt-based medical image segmentation have enabled clinicians to identify tumors using simple inputs such as bounding boxes or text prompts. However, existing methods face challenges when doctors need to interact through natural language or when position reasoning is required, that is, understanding spatial relationships between anatomical structures and pathologies. We present PRS-Med, a framework that integrates vision-language models with segmentation capabilities to generate both accurate segmentation masks and corresponding spatial reasoning outputs. Additionally, we introduce the MMRS dataset (Multimodal Medical in Positional Reasoning Segmentation), which provides diverse, spatially grounded question-answer pairs to address the lack of position-reasoning data in medical imaging. PRS-Med demonstrates superior performance across six imaging modalities (CT, MRI, X-ray, ultrasound, endoscopy, and RGB), significantly outperforming state-of-the-art methods in both segmentation accuracy and position reasoning. Our approach enables intuitive doctor-system interaction through natural language, facilitating more efficient diagnoses. Our dataset pipeline, model, and codebase will be released to foster further research in spatially aware multimodal reasoning for medical applications.
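
The abstract describes a model that takes a medical image together with a natural-language position question and returns both a segmentation mask and a textual spatial-reasoning answer. As a minimal sketch of what such an interface could look like, the snippet below defines a stub with that input/output contract; the class names, method signature, and placeholder logic are illustrative assumptions and are not taken from the paper or its (to-be-released) codebase.

```python
# Hypothetical sketch of a PRS-Med-style interface: image + position question in,
# segmentation mask + spatial-reasoning answer out. All names and logic here are
# illustrative assumptions, not the authors' released code.
from dataclasses import dataclass
import numpy as np


@dataclass
class PositionReasoningResult:
    mask: np.ndarray   # binary segmentation mask with the same HxW as the input image
    answer: str        # natural-language description of the target's position


class PRSMedStub:
    """Placeholder standing in for a vision-language segmentation model."""

    def predict(self, image: np.ndarray, question: str) -> PositionReasoningResult:
        # A real model would fuse visual features with the text prompt and decode
        # both a mask and an answer; here we return trivial placeholders.
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        answer = "placeholder: no model weights loaded"
        return PositionReasoningResult(mask=mask, answer=answer)


if __name__ == "__main__":
    # Example query in the style of the spatially grounded QA pairs the abstract
    # attributes to the MMRS dataset.
    image = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a CT/MRI/X-ray slice
    result = PRSMedStub().predict(image, "Where is the tumor located relative to the left kidney?")
    print(result.answer, result.mask.shape)
```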

View on arXiv: https://arxiv.org/abs/2505.11872
@article{trinh2025_2505.11872,
  title={PRS-Med: Position Reasoning Segmentation with Vision-Language Model in Medical Imaging},
  author={Quoc-Huy Trinh and Minh-Van Nguyen and Jung Peng and Ulas Bagci and Debesh Jha},
  journal={arXiv preprint arXiv:2505.11872},
  year={2025}
}