HFGD: High-level Feature Guided Decoder for Semantic Segmentation

Existing pyramid-based upsamplers (e.g., SemanticFPN), although efficient, usually produce less accurate results than dilation-based models with the same backbone. This is partially caused by contaminated high-level features, since they are fused and fine-tuned with noisy low-level features on limited data. To address this issue, we propose to use powerful pretrained high-level features as guidance (HFG) when learning to upsample the fine-grained low-level features. Specifically, the class tokens are trained along with only the high-level features from the backbone. These class tokens are then reused by the upsampler for classification, guiding the upsampler features toward the more discriminative backbone features. A key design of HFG is to protect the high-level features from contamination via proper stop-gradient operations, so that the backbone is not updated by gradients from the upsampler. To push the upper limit of HFG, we introduce a context augmentation encoder (CAE) that operates efficiently and effectively on the low-resolution high-level features, yielding improved representations and thus better guidance. We evaluate the proposed method on three benchmarks: Pascal Context, COCOStuff164k, and Cityscapes. Our method achieves state-of-the-art results among methods that do not use extra training data, demonstrating its effectiveness and generalization ability. The complete code will be released.
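The following is a minimal sketch of the stop-gradient guidance idea described above, not the authors' released implementation. The module names (`HFGHead`, `backbone`, `upsampler`), the shared channel dimension, and the placeholder upsampler are assumptions for illustration; it only shows how shared class tokens can classify both the high-level features and the upsampled features while `.detach()` keeps upsampler gradients from reaching the backbone.

```python
# Hedged sketch, not the authors' code: names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HFGHead(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        # Class tokens trained only against the high-level backbone features.
        self.class_tokens = nn.Parameter(torch.randn(num_classes, dim))
        # Placeholder fine-grained upsampler (assumed, for illustration only).
        self.upsampler = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True)
        )

    def forward(self, high_feat, low_feat):
        # high_feat: (B, dim, h, w) pretrained high-level features
        # low_feat:  (B, dim, H, W) noisy low-level features (same dim assumed)

        # Auxiliary logits: class tokens applied to high-level features only,
        # so the tokens are learned purely from the backbone representation.
        hi_logits = torch.einsum("kc,bchw->bkhw", self.class_tokens, high_feat)

        # Stop-gradient: the upsampler consumes the high-level features, but
        # its loss cannot contaminate them (or the backbone) via backprop.
        guided = low_feat + F.interpolate(
            high_feat.detach(), size=low_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        up_feat = self.upsampler(guided)

        # Reuse the (detached) class tokens to classify the upsampled features,
        # pulling them toward the more discriminative backbone features.
        up_logits = torch.einsum(
            "kc,bchw->bkhw", self.class_tokens.detach(), up_feat
        )
        return hi_logits, up_logits
```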