RadSAM: Segmenting 3D radiological images with a 2D promptable model

Julien Khlaut
Elodie Ferreres
Daniel Tordjman
Hélène Philippe
Tom Boeken
Pierre Manceron
Corentin Dancette
Abstract

Medical image segmentation is a crucial and time-consuming task in clinical care, where mask precision is extremely important. The Segment Anything Model (SAM) offers a promising approach, as it provides an interactive interface based on visual prompting and editing to refine an initial segmentation. This model has strong generalization capabilities, does not rely on predefined classes, and adapts to diverse objects; however, it is pre-trained on natural images and lacks the ability to process medical data effectively. In addition, it is built for 2D images, whereas much of medical imaging is inherently 3D, such as CT and MRI. Recent adaptations of SAM for medical imaging are based on 2D models and thus require one prompt per slice to segment 3D objects, making the segmentation process tedious; they also lack important features such as editing. To bridge this gap, we propose RadSAM, a novel method for segmenting 3D objects with a 2D model from a single prompt. In practice, we train a 2D model using noisy masks as initial prompts, in addition to bounding boxes and points. We then use this novel prompt type with an iterative inference pipeline to reconstruct the 3D mask slice by slice. We introduce a benchmark to evaluate the model's ability to segment 3D objects in CT images from a single prompt, and we assess the model's out-of-domain transfer and editing capabilities. We demonstrate the effectiveness of our approach against state-of-the-art models on this benchmark using the AMOS abdominal organ segmentation dataset.
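The iterative inference pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `model` callable (a 2D promptable segmenter taking a slice and a mask prompt) and the empty-mask stopping criterion are assumptions for the sake of the example.

```python
import numpy as np

def segment_volume(model, volume, start_idx, initial_prompt):
    """Reconstruct a 3D mask slice by slice with a 2D promptable model.

    model:          callable(slice_2d, prompt_mask) -> predicted 2D bool mask
                    (hypothetical interface standing in for the 2D segmenter)
    volume:         (D, H, W) array of image slices
    start_idx:      index of the slice carrying the user's single prompt
    initial_prompt: 2D mask prompt on that slice (a box or point prompt
                    would be rasterized to a mask first)
    """
    depth = volume.shape[0]
    masks = np.zeros_like(volume, dtype=bool)
    masks[start_idx] = model(volume[start_idx], initial_prompt)

    # Propagate outward in both directions, feeding each predicted mask
    # as the mask prompt for the neighbouring slice; stop once the
    # prediction becomes empty (assumed end of the object).
    for step in (1, -1):
        prompt = masks[start_idx]
        i = start_idx + step
        while 0 <= i < depth and prompt.any():
            prompt = model(volume[i], prompt)
            masks[i] = prompt
            i += step
    return masks
```

Only the starting slice needs a user-provided prompt; every other slice is prompted automatically with the prediction from its neighbour, which is what lets a single prompt drive the full 3D segmentation.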

@article{khlaut2025_2504.20837,
  title={RadSAM: Segmenting 3D radiological images with a 2D promptable model},
  author={Julien Khlaut and Elodie Ferreres and Daniel Tordjman and Hélène Philippe and Tom Boeken and Pierre Manceron and Corentin Dancette},
  journal={arXiv preprint arXiv:2504.20837},
  year={2025}
}