Unsupervised Semantic Segmentation by Distilling Feature Correspondences

16 March 2022
Mark Hamilton
Zhoutong Zhang
Bharath Hariharan
Noah Snavely
William T. Freeman
Abstract

Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation. To solve this task, algorithms must produce features for every pixel that are both semantically meaningful and compact enough to form distinct clusters. Unlike previous works which achieve this with a single end-to-end framework, we propose to separate feature learning from cluster compactification. Empirically, we show that current unsupervised feature learning frameworks already generate dense features whose correlations are semantically consistent. This observation motivates us to design STEGO (Self-supervised Transformer with Energy-based Graph Optimization), a novel framework that distills unsupervised features into high-quality discrete semantic labels. At the core of STEGO is a novel contrastive loss function that encourages features to form compact clusters while preserving their relationships across the corpora. STEGO yields a significant improvement over the prior state of the art on both the CocoStuff (+14 mIoU) and Cityscapes (+9 mIoU) semantic segmentation challenges.
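The abstract's central idea, distilling the correlation structure of frozen self-supervised features into a compact segmentation embedding via a contrastive loss, can be sketched roughly as below. This is a minimal illustration under assumptions, not the paper's implementation: the function name, the bias value b=0.25, and the single-image (self-correspondence) setting are illustrative choices, and the full STEGO loss also draws correspondences across images in the corpus.

```python
import torch
import torch.nn.functional as F

def correspondence_distillation_loss(f_backbone, f_seg, b=0.25):
    """Sketch of a feature-correspondence distillation loss (illustrative, not the official STEGO code).

    f_backbone: frozen self-supervised features, shape (B, C, H, W)
    f_seg:      features from the trainable segmentation head, shape (B, D, H, W)
    b:          bias controlling how strongly weakly-correlated pairs are pushed apart
    """
    # Cosine-similarity "correspondence" tensors between all pairs of spatial
    # locations within each image, for both feature spaces.
    fb = F.normalize(f_backbone.flatten(2), dim=1)        # (B, C, H*W)
    fs = F.normalize(f_seg.flatten(2), dim=1)             # (B, D, H*W)
    corr_target = torch.einsum("bci,bcj->bij", fb, fb)    # backbone correlations (fixed target)
    corr_learned = torch.einsum("bdi,bdj->bij", fs, fs)   # segmentation-head correlations

    # Attract location pairs the backbone deems similar (corr_target > b) and
    # repel the rest; clamping the learned correlation at zero keeps the
    # repulsive term from growing without bound.
    return -((corr_target - b) * corr_learned.clamp(min=0)).mean()

# Example usage with random tensors standing in for real features:
# loss = correspondence_distillation_loss(torch.randn(2, 384, 28, 28), torch.randn(2, 70, 28, 28))
```

Minimizing this loss makes the low-dimensional segmentation features reproduce the backbone's correlation pattern while collapsing into tighter clusters, which can then be discretized into semantic labels (e.g., by clustering), in line with the framework described in the abstract.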
