Segment Anyword: Mask Prompt Inversion for Open-Set Grounded Segmentation

Abstract

Open-set image segmentation remains challenging: existing methods often demand extensive training or fine-tuning and struggle to segment the same object consistently across diverse referring expressions. Motivated by this, we propose Segment Anyword, a training-free visual concept prompt learning approach for open-set language-grounded segmentation. It relies on token-level cross-attention maps from a frozen diffusion model to produce segmentation surrogates, or mask prompts, which are then refined into targeted object masks. These initial prompts typically lose coherence and consistency as image-text complexity increases, resulting in fragmented masks. To tackle this, we further introduce a linguistic-guided visual prompt regularization that binds and clusters visual prompts according to sentence dependency and syntactic structure, enabling the extraction of robust, noise-tolerant mask prompts and significant improvements in segmentation accuracy. The proposed approach is effective, generalizes across different open-set segmentation tasks, and achieves state-of-the-art results of 52.5 (+6.8 relative) mIoU on Pascal Context 59, 67.73 (+25.73 relative) cIoU on gRefCOCO, and 67.4 (+1.1 relative to fine-tuned methods) mIoU on GranDf, the most complex open-set grounded segmentation task in the field.
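To make the first stage of this pipeline concrete, below is a minimal sketch of harvesting token-level cross-attention maps from a frozen diffusion model and thresholding them into coarse mask prompts. It assumes Stable Diffusion v1.5 via Hugging Face diffusers and follows the familiar prompt-to-prompt-style attention-store pattern; it illustrates the general technique only, not the authors' released implementation (the paper inverts a real image-text pair into the diffusion process, whereas this sketch reads attention during ordinary text-to-image sampling for simplicity).

import torch
from diffusers import StableDiffusionPipeline

class AttnStore:
    """Accumulates cross-attention probabilities from every attn2 layer."""
    def __init__(self):
        self.maps = []  # each entry: (batch*heads, query_len, text_len)

class StoreCrossAttnProcessor:
    """Like diffusers' default attention processor, but records the
    softmaxed cross-attention probabilities before applying them."""
    def __init__(self, store):
        self.store = store

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        context = (encoder_hidden_states
                   if encoder_hidden_states is not None else hidden_states)
        q = attn.head_to_batch_dim(attn.to_q(hidden_states))
        k = attn.head_to_batch_dim(attn.to_k(context))
        v = attn.head_to_batch_dim(attn.to_v(context))
        probs = attn.get_attention_scores(q, k, attention_mask)
        if encoder_hidden_states is not None:  # cross-attention only
            self.store.maps.append(probs.detach().cpu())
        out = attn.batch_to_head_dim(torch.bmm(probs, v))
        return attn.to_out[1](attn.to_out[0](out))  # linear proj + dropout

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
store = AttnStore()
pipe.unet.set_attn_processor({
    name: StoreCrossAttnProcessor(store)
    if name.endswith("attn2.processor") else proc
    for name, proc in pipe.unet.attn_processors.items()})

prompt = "a dog chasing a red ball"
_ = pipe(prompt, num_inference_steps=20)

# Keep the 16x16 maps (query_len 256 at 512px), average over steps, layers
# and heads, and take the conditional half of the CFG batch ([uncond, cond]).
maps = [m[m.shape[0] // 2:] for m in store.maps if m.shape[1] == 256]
avg = torch.stack(maps).float().mean(dim=(0, 1))  # (256, 77)

tokens = pipe.tokenizer.convert_ids_to_tokens(pipe.tokenizer(prompt).input_ids)
heat = avg[:, tokens.index("dog</w>")].reshape(16, 16)  # one map per word
heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
mask_prompt = heat > 0.5  # coarse mask prompt for "dog", to be refined

The linguistic-guided regularization can be approximated in the same spirit with an off-the-shelf dependency parser: words that modify the same noun are grouped, and their per-word heatmaps are merged into a single, more stable mask prompt before thresholding. A toy sketch with spaCy follows; the grouping heuristic here is an assumption for illustration, not the paper's exact rule.

import spacy

nlp = spacy.load("en_core_web_sm")

def referring_groups(sentence):
    """Group word indices that describe one object: each noun plus its
    determiner/adjectival/compound/numeric modifiers."""
    doc = nlp(sentence)
    return [sorted([tok.i] + [c.i for c in tok.children
                              if c.dep_ in ("det", "amod", "compound", "nummod")])
            for tok in doc if tok.pos_ in ("NOUN", "PROPN")]

print(referring_groups("a dog chasing a red ball"))
# [[0, 1], [3, 4, 5]]  ->  {"a dog"} and {"a red ball"}

Averaging the attention maps within each group yields one mask prompt per referred object, which loosely corresponds to what the abstract calls binding and clustering visual prompts by sentence dependency and syntactic structure.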

@article{liu2025_2505.17994,
  title={Segment Anyword: Mask Prompt Inversion for Open-Set Grounded Segmentation},
  author={Zhihua Liu and Amrutha Saseendran and Lei Tong and Xilin He and Fariba Yousefi and Nikolay Burlutskiy and Dino Oglic and Tom Diethe and Philip Teare and Huiyu Zhou and Chen Jin},
  journal={arXiv preprint arXiv:2505.17994},
  year={2025}
}