ResearchTrend.AI


Target and Task specific Source-Free Domain Adaptive Image Segmentation

29 March 2022
VS Vibashan
Jeya Maria Jose Valanarasu
Vishal M. Patel
Abstract

Solving the domain shift problem during inference is essential in medical imaging, as most deep-learning-based solutions suffer from it. In practice, domain shifts are tackled by performing Unsupervised Domain Adaptation (UDA), where a model is adapted to an unlabelled target domain by leveraging the labelled source data. In medical scenarios, the data comes with significant privacy concerns, making it difficult to apply standard UDA techniques. Hence, a setting closer to clinical practice is Source-Free UDA (SFUDA), where we have access to a source-trained model but not the source data during adaptation. Existing SFUDA methods rely on pseudo-label-based self-training techniques to address the domain shift. However, these pseudo-labels often have high entropy due to the domain shift, and adapting the source model with noisy pseudo-labels leads to sub-optimal performance. To overcome this limitation, we propose a systematic two-stage approach for SFUDA comprising target-specific adaptation followed by task-specific adaptation. In target-specific adaptation, we enhance pseudo-label generation by minimizing high-entropy regions using the proposed ensemble entropy minimization loss and a selective voting strategy. In task-specific adaptation, we exploit the enhanced pseudo-labels using a student-teacher framework to effectively learn segmentation on the target domain. We evaluate our proposed method on 2D fundus datasets and 3D MRI volumes across 7 different domain shifts, where we perform better than existing UDA and SFUDA methods for medical image segmentation. Code is available at https://github.com/Vibashan/tt-sfuda.
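The target-specific stage described above hinges on two ideas: scoring ensemble pseudo-labels by their entropy, and keeping only confident (low-entropy) pixels for self-training. The sketch below illustrates that general mechanism in plain Python; the function names, the fixed entropy threshold, and the tiny two-pixel example are illustrative assumptions, not the paper's exact formulation (see the linked repository for the authors' implementation).

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def pseudo_label_mask(ensemble_probs, threshold=0.5):
    """Average per-pixel predictions over an ensemble of forward
    passes, then keep only low-entropy (confident) pixels as
    pseudo-labels; high-entropy pixels are masked out.

    ensemble_probs: per pixel, a list of probability vectors,
    one vector per ensemble member.
    Returns (labels, mask) with one entry per pixel.
    """
    labels, mask = [], []
    for pixel_preds in ensemble_probs:
        n_models = len(pixel_preds)
        n_classes = len(pixel_preds[0])
        # Ensemble mean prediction for this pixel.
        mean = [sum(p[c] for p in pixel_preds) / n_models
                for c in range(n_classes)]
        labels.append(max(range(n_classes), key=lambda c: mean[c]))
        mask.append(entropy(mean) < threshold)
    return labels, mask

# Two pixels, two ensemble members, binary segmentation:
preds = [
    [[0.95, 0.05], [0.90, 0.10]],   # models agree -> low entropy, kept
    [[0.55, 0.45], [0.40, 0.60]],   # models disagree -> high entropy, masked
]
labels, mask = pseudo_label_mask(preds)
```

Only pixels where `mask` is true would contribute to the self-training loss; minimizing the entropy of the ensemble mean pushes more pixels below the threshold over the course of adaptation.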
