Bridging the Inter-Domain Gap through Low-Level Features for Cross-Modal Medical Image Segmentation

17 May 2025
Pengfei Lyu
Pak-Hei Yeung
Xiaosheng Yu
Jing Xia
Jianning Chi
Chengdong Wu
Jagath C. Rajapakse
Topics: OOD, MedIm
Abstract

This paper addresses the task of cross-modal medical image segmentation by exploring unsupervised domain adaptation (UDA) approaches. We propose a model-agnostic UDA framework, LowBridge, which builds on the simple observation that cross-modal images share similar low-level features (e.g., edges) because they depict the same structures. Specifically, we first train a generative model to recover the source images from their edge features, and then separately train a segmentation model on the generated source images. At test time, edge features extracted from the target images are fed into the pretrained generative model to produce source-style target domain images, which are then segmented by the pretrained segmentation network. Despite its simplicity, extensive experiments on various publicly available datasets demonstrate that LowBridge achieves state-of-the-art performance, outperforming eleven existing UDA approaches under different settings. Notably, further ablation studies show that LowBridge is agnostic to the choice of generative and segmentation models, suggesting that it can be seamlessly combined with the most advanced models to achieve even stronger results in the future. The code is available at this https URL.
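To make the test-time pipeline concrete, here is a minimal PyTorch sketch of the inference path the abstract describes: extract edge features from a target-domain image, map them to a source-style image with a pretrained generator, and segment the result with a network trained on the source domain. The Sobel edge extractor and the `generator`/`segmenter` modules are illustrative assumptions, not the paper's specific components; the paper reports the framework is agnostic to the choice of generative and segmentation models.

```python
# Hypothetical sketch of the LowBridge test-time pipeline; the actual
# edge extractor and model architectures are not specified here.
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Extract low-level edge features with a Sobel filter.

    img: (B, 1, H, W) grayscale medical image, values in [0, 1].
    Returns an edge-magnitude map of the same shape.
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # vertical-gradient kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


@torch.no_grad()
def segment_target_image(target_img: torch.Tensor,
                         generator: torch.nn.Module,
                         segmenter: torch.nn.Module) -> torch.Tensor:
    """Cross-modal inference: target image -> edges -> source-style
    image -> segmentation mask.

    `generator` (edges -> source-style image) and `segmenter` are
    assumed to be pretrained on the source domain, as in the abstract.
    """
    edges = sobel_edges(target_img)   # low-level features shared across modalities
    source_style = generator(edges)   # render edges in the source style
    logits = segmenter(source_style)  # segment as if it were a source image
    return logits.argmax(dim=1)       # per-pixel class labels, (B, H, W)
```

Because only edge features cross the domain boundary, both pretrained models can in principle be swapped for stronger generative or segmentation architectures without retraining the other, which matches the model-agnostic claim in the abstract.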

View on arXiv
@article{lyu2025_2505.11909,
  title={Bridging the Inter-Domain Gap through Low-Level Features for Cross-Modal Medical Image Segmentation},
  author={Pengfei Lyu and Pak-Hei Yeung and Xiaosheng Yu and Jing Xia and Jianning Chi and Chengdong Wu and Jagath C. Rajapakse},
  journal={arXiv preprint arXiv:2505.11909},
  year={2025}
}