Geometric Feature Prompting of Image Segmentation Models

27 May 2025
Kenneth Ball
Erin Taylor
Nirav Patel
Andrew Bartels
Gary Koplik
James Polly
Jay Hineman
Abstract

Advances in machine learning, especially the introduction of transformer architectures and vision transformers, have led to the development of highly capable computer vision foundation models. The segment anything model (known colloquially as SAM, and more recently SAM 2) is a highly capable foundation model for segmentation of natural images and has been further applied to medical and scientific image segmentation tasks. SAM relies on prompts -- points or regions of interest in an image -- to generate associated segmentations. In this manuscript we propose the use of a geometrically motivated prompt generator to produce prompt points that are colocated with particular features of interest. Focused prompting enables the automatic generation of sensitive and specific segmentations in a scientific image analysis task using SAM with relatively few point prompts. The image analysis task examined is the segmentation of plant roots in rhizotron or minirhizotron images, which has historically been a difficult task to automate. Hand annotation of rhizotron images is laborious and often subjective; SAM, initialized with GeomPrompt local ridge prompts, has the potential to dramatically improve rhizotron image processing. The authors have concurrently released an open source software suite called geomprompt (this https URL) that can produce point prompts in a format that enables direct integration with the segment-anything package.
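To make the idea concrete, here is a minimal sketch of ridge-based point prompting, assuming nothing about the actual geomprompt API: ridge strength is scored by the smaller eigenvalue of the image Hessian (large in magnitude and negative across a bright, root-like line), and the strongest responses are emitted as `(x, y)` point coordinates with foreground labels, the prompt format that SAM's predictor consumes.

```python
import numpy as np

def ridge_point_prompts(image, n_points=5):
    """Select point prompts at strong bright-ridge responses.

    Ridge strength is the negative of the smaller Hessian eigenvalue,
    which is large on bright, line-like structures such as roots.
    This is an illustrative sketch, not the geomprompt implementation.
    """
    # Finite-difference second derivatives (interior pixels only).
    Iyy = np.zeros_like(image)
    Ixx = np.zeros_like(image)
    Ixy = np.zeros_like(image)
    Iyy[1:-1, :] = image[2:, :] - 2 * image[1:-1, :] + image[:-2, :]
    Ixx[:, 1:-1] = image[:, 2:] - 2 * image[:, 1:-1] + image[:, :-2]
    Ixy[1:-1, 1:-1] = (image[2:, 2:] - image[2:, :-2]
                       - image[:-2, 2:] + image[:-2, :-2]) / 4.0
    # Smaller eigenvalue of the 2x2 Hessian, in closed form.
    tr = Ixx + Iyy
    det = Ixx * Iyy - Ixy ** 2
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    strength = -(tr / 2.0 - disc)  # large where a bright ridge runs
    # Keep the n strongest responses as prompt points.
    idx = np.argsort(strength.ravel())[::-1][:n_points]
    ys, xs = np.unravel_index(idx, image.shape)
    point_coords = np.stack([xs, ys], axis=1)             # SAM expects (x, y)
    point_labels = np.ones(len(point_coords), dtype=int)  # 1 = foreground
    return point_coords, point_labels

# Synthetic "root": a bright horizontal stripe in a dark image.
img = np.zeros((64, 64))
img[30:33, :] = 1.0
coords, labels = ridge_point_prompts(img, n_points=5)
```

Arrays in this shape can be passed directly to `SamPredictor.predict(point_coords=..., point_labels=...)` from the segment-anything package; a real prompt generator would also apply smoothing and non-maximum suppression so prompts spread along the ridge rather than clustering.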

View on arXiv
@article{ball2025_2505.21644,
  title={Geometric Feature Prompting of Image Segmentation Models},
  author={Kenneth Ball and Erin Taylor and Nirav Patel and Andrew Bartels and Gary Koplik and James Polly and Jay Hineman},
  journal={arXiv preprint arXiv:2505.21644},
  year={2025}
}