DPSeg: Dual-Prompt Cost Volume Learning for Open-Vocabulary Semantic Segmentation

Open-vocabulary semantic segmentation aims to segment images into distinct semantic regions at the pixel level, for both seen and unseen categories. Current methods utilize text embeddings from pre-trained vision-language models like CLIP but struggle with the inherent domain gap between image and text embeddings, even after extensive alignment during training. Additionally, relying solely on deep text-aligned features limits shallow-level feature guidance, which is crucial for detecting small objects and fine details, ultimately reducing segmentation accuracy. To address these limitations, we propose a dual-prompting framework, DPSeg, for this task. Our approach combines dual-prompt cost volume generation, a cost volume-guided decoder, and a semantic-guided prompt refinement strategy that leverages our dual-prompting scheme to mitigate alignment issues in visual prompt generation. By incorporating visual embeddings from a visual prompt encoder, our approach reduces the domain gap between text and image embeddings while providing multi-level guidance through shallow features. Extensive experiments demonstrate that our method significantly outperforms existing state-of-the-art approaches on multiple public datasets.
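To make the central idea concrete, below is a minimal sketch of a dual-prompt cost volume: dense image features are correlated with both text prompt embeddings and visual prompt embeddings to produce a per-class similarity map. The tensor shapes, the cosine-similarity formulation, and the averaging fusion are illustrative assumptions; the abstract does not specify the exact construction, and the paper's decoder may fuse the two volumes differently.

```python
# Illustrative sketch of a dual-prompt cost volume (assumed shapes and fusion,
# not the paper's exact implementation).
import torch
import torch.nn.functional as F

def dual_prompt_cost_volume(image_feats, text_embeds, visual_embeds):
    """Correlate dense image features with text and visual prompt embeddings.

    image_feats:   (B, D, H, W) dense features from the image encoder
    text_embeds:   (K, D) one embedding per candidate class (text prompts)
    visual_embeds: (K, D) one embedding per class (visual prompt encoder)
    Returns a cost volume of shape (B, K, H, W).
    """
    B, D, H, W = image_feats.shape
    feats = F.normalize(image_feats.flatten(2), dim=1)    # (B, D, H*W)
    text = F.normalize(text_embeds, dim=1)                # (K, D)
    vis = F.normalize(visual_embeds, dim=1)               # (K, D)

    # Cosine-similarity cost volume for each prompt modality.
    cost_text = torch.einsum("kd,bdn->bkn", text, feats)  # (B, K, H*W)
    cost_vis = torch.einsum("kd,bdn->bkn", vis, feats)    # (B, K, H*W)

    # Assumed fusion: simple average of the two volumes.
    cost = 0.5 * (cost_text + cost_vis)
    return cost.view(B, -1, H, W)

# Usage with random tensors standing in for encoder outputs:
cv = dual_prompt_cost_volume(
    torch.randn(2, 512, 24, 24),   # image features
    torch.randn(150, 512),         # text prompt embeddings
    torch.randn(150, 512),         # visual prompt embeddings
)
print(cv.shape)  # torch.Size([2, 150, 24, 24])
```

The point of the second (visual) correlation is that visual prompt embeddings live in the image embedding space, so their similarity map sidesteps the image-text domain gap that the text-only cost volume inherits from CLIP.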
@article{zhao2025_2505.11676,
  title   = {DPSeg: Dual-Prompt Cost Volume Learning for Open-Vocabulary Semantic Segmentation},
  author  = {Ziyu Zhao and Xiaoguang Li and Linjia Shi and Nasrin Imanpour and Song Wang},
  journal = {arXiv preprint arXiv:2505.11676},
  year    = {2025}
}