LangDA: Building Context-Awareness via Language for Domain Adaptive Semantic Segmentation

Abstract

Unsupervised domain adaptation for semantic segmentation (DASS) aims to transfer knowledge from a label-rich source domain to a target domain with no labels. Two key approaches in DASS are (1) vision-only approaches using masking or multi-resolution crops, and (2) language-based approaches that use generic, class-wise prompts informed by the target domain (e.g., "a {snowy} photo of a {class}"). However, the former is susceptible to noisy pseudo-labels that are biased toward the source domain. The latter does not fully capture the intricate spatial relationships among objects -- key for dense prediction tasks. To this end, we propose LangDA. LangDA addresses these challenges by, first, learning contextual relationships between objects via VLM-generated scene descriptions (e.g., "a pedestrian is on the sidewalk, and the street is lined with buildings."). Second, LangDA aligns the entire image's features with the text representation of this context-aware scene caption, learning generalized representations via text. With this, LangDA sets a new state-of-the-art across three DASS benchmarks, outperforming existing methods by 2.6%, 1.4%, and 3.9%.
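The image-text alignment described above can be sketched as a simple cosine-similarity objective between a pooled image feature and the embedding of the scene caption. The sketch below is a hypothetical, minimal illustration (function names, shapes, and the exact loss form are assumptions, not the authors' implementation):

```python
import numpy as np

def caption_alignment_loss(image_feat: np.ndarray, caption_feat: np.ndarray) -> float:
    """Illustrative sketch of aligning a pooled image feature with the text
    embedding of a context-aware scene caption (cf. LangDA's second step).

    Both vectors are L2-normalized; the loss is 1 - cosine similarity,
    so identical directions give 0 and orthogonal directions give 1.
    """
    img = image_feat / np.linalg.norm(image_feat)
    txt = caption_feat / np.linalg.norm(caption_feat)
    return 1.0 - float(np.dot(img, txt))

# Toy check: features pointing the same way incur no alignment penalty.
v = np.array([0.5, 1.0, -0.25])
assert abs(caption_alignment_loss(v, 2.0 * v)) < 1e-9

# Orthogonal image and caption features incur the maximum penalty of 1.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
assert abs(caption_alignment_loss(a, b) - 1.0) < 1e-9
```

In practice, the image feature would come from the segmentation encoder and the caption feature from a frozen VLM text encoder; this sketch only shows the shape of the alignment objective, not the full training pipeline.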

@article{liu2025_2503.12780,
  title={LangDA: Building Context-Awareness via Language for Domain Adaptive Semantic Segmentation},
  author={Chang Liu and Bavesh Balaji and Saad Hossain and C Thomas and Kwei-Herng Lai and Raviteja Vemulapalli and Alexander Wong and Sirisha Rambhatla},
  journal={arXiv preprint arXiv:2503.12780},
  year={2025}
}