This paper addresses the task of cross-modal medical image segmentation by exploring unsupervised domain adaptation (UDA) approaches. We propose a model-agnostic UDA framework, LowBridge, which builds on the simple observation that cross-modal images share similar low-level features (e.g., edges) because they depict the same anatomical structures. Specifically, we first train a generative model to recover the source images from their edge features, and then separately train a segmentation model on the generated source images. At test time, edge features from the target images are fed to the pretrained generative model to produce source-style target domain images, which are then segmented using the pretrained segmentation network. Despite its simplicity, extensive experiments on various publicly available datasets demonstrate that LowBridge achieves state-of-the-art performance, outperforming eleven existing UDA approaches under different settings. Notably, further ablation studies show that LowBridge is agnostic to the choice of generative and segmentation models, suggesting it can be seamlessly combined with more advanced models to achieve even stronger results in the future. The code is available at this https URL.
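The abstract describes a three-stage pipeline: edge extraction, edge-to-image generation, and segmentation. The following is a minimal sketch of how such a pipeline could be wired together in PyTorch; the Sobel edge extractor, the L1 and cross-entropy losses, and all function names are illustrative assumptions, not the authors' implementation, and any image-to-image generator and segmentation network could be substituted.

    # Hypothetical sketch of the LowBridge pipeline; details are assumed.
    import torch
    import torch.nn.functional as F

    def edge_features(img: torch.Tensor) -> torch.Tensor:
        """Sobel edge maps for a (N, 1, H, W) batch -- one plausible
        choice of shared low-level feature (assumption)."""
        kx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]]).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    # Stage 1: train a generator to recover source images from their edges.
    def train_generator(generator, source_images, optimizer):
        for img in source_images:
            recon = generator(edge_features(img))
            loss = F.l1_loss(recon, img)  # reconstruction loss (assumed)
            optimizer.zero_grad(); loss.backward(); optimizer.step()

    # Stage 2: separately train a segmenter on the generated source images.
    def train_segmenter(generator, segmenter, source_images, source_masks,
                        optimizer):
        for img, mask in zip(source_images, source_masks):
            with torch.no_grad():
                gen = generator(edge_features(img))
            loss = F.cross_entropy(segmenter(gen), mask)
            optimizer.zero_grad(); loss.backward(); optimizer.step()

    # Test time: target edges -> source-style image -> segmentation.
    @torch.no_grad()
    def segment_target(generator, segmenter, target_image):
        source_style = generator(edge_features(target_image))
        return segmenter(source_style).argmax(dim=1)

Because both training stages touch only source-domain data, the target domain enters the pipeline solely through its edge features at test time, which is what makes the framework agnostic to the particular generative and segmentation models used.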
@article{lyu2025_2505.11909,
  title={Bridging the Inter-Domain Gap through Low-Level Features for Cross-Modal Medical Image Segmentation},
  author={Pengfei Lyu and Pak-Hei Yeung and Xiaosheng Yu and Jing Xia and Jianning Chi and Chengdong Wu and Jagath C. Rajapakse},
  journal={arXiv preprint arXiv:2505.11909},
  year={2025}
}