Calibrating Undisciplined Over-Smoothing in Transformer for Weakly Supervised Semantic Segmentation

Weakly supervised semantic segmentation (WSSS) has recently attracted considerable attention because it requires fewer annotations than fully supervised approaches, making it especially promising for large-scale image segmentation tasks. Although many vision transformer-based methods leverage self-attention affinity matrices to refine Class Activation Maps (CAMs), they often treat each layer's affinity equally and thus introduce considerable background noise at deeper layers, where attention tends to converge excessively on certain tokens (i.e., over-smoothing). We observe that while this convergence onto a subset of tokens is a natural property of deep-level attention, unregulated query-key affinity can generate unpredictable activation patterns (undisciplined over-smoothing) that degrade CAM accuracy. To address these limitations, we propose an Adaptive Re-Activation Mechanism (AReAM), which exploits shallow-level affinity to guide deeper-layer convergence in an entropy-aware manner, thereby suppressing background noise and re-activating crucial semantic regions in the CAMs. Experiments on two commonly used datasets demonstrate that AReAM substantially improves segmentation performance compared with existing WSSS methods, reducing noise while sharpening focus on relevant semantic regions. Overall, this work underscores the importance of controlling deep-level attention to mitigate undisciplined over-smoothing, introduces an entropy-aware mechanism that harmonizes shallow and deep-level affinities, and provides a refined approach to enhance transformer-based WSSS accuracy by re-activating CAMs.
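To make the described mechanism concrete, below is a minimal sketch of how an entropy-aware blend of shallow and deep attention affinities could refine a CAM. The function names (attention_entropy, refine_cam), the shallow/deep layer split, and the direction of the entropy-based weighting are illustrative assumptions under this sketch, not the authors' exact formulation of AReAM.

```python
# Minimal sketch of entropy-aware CAM refinement in the spirit of AReAM.
# Shapes, layer splits, and weighting choices are assumptions for illustration.

import torch
import torch.nn.functional as F


def attention_entropy(affinity: torch.Tensor) -> torch.Tensor:
    """Mean row-wise entropy of an (N, N) attention affinity matrix."""
    p = affinity.clamp_min(1e-8)
    p = p / p.sum(dim=-1, keepdim=True)      # renormalize each query's row
    return -(p * p.log()).sum(dim=-1).mean()  # average entropy over queries


def refine_cam(cam: torch.Tensor,
               affinities: list[torch.Tensor],
               num_shallow: int = 6) -> torch.Tensor:
    """
    cam:         (C, N) class activation scores over N patch tokens.
    affinities:  per-layer attention affinities, each (N, N), head-averaged.
    num_shallow: how many early layers serve as "shallow" guidance
                 (an illustrative split, not a prescribed value).
    """
    shallow = torch.stack(affinities[:num_shallow]).mean(dim=0)
    refined = cam
    for aff in affinities[num_shallow:]:
        # One plausible weighting: trust a deep layer less when its attention
        # entropy is low (i.e., overly concentrated on a few tokens), falling
        # back on shallow-layer guidance; the paper's weighting may differ.
        ent = attention_entropy(aff)
        max_ent = torch.log(torch.tensor(float(aff.shape[-1])))
        alpha = (ent / max_ent).clamp(0.0, 1.0)
        calibrated = alpha * aff + (1.0 - alpha) * shallow
        refined = refined @ calibrated.T       # propagate activations via affinity
    return F.relu(refined)
```

In this sketch the shallow affinity acts as a stable prior, and each deep layer contributes only in proportion to how diffuse (high-entropy) its attention remains, which is one way to suppress the background noise attributed to undisciplined over-smoothing.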
@article{cheng2025_2305.03112,
  title   = {Calibrating Undisciplined Over-Smoothing in Transformer for Weakly Supervised Semantic Segmentation},
  author  = {Lechao Cheng and Zerun Liu and Jingxuan He and Chaowei Fang and Dingwen Zhang and Meng Wang},
  journal = {arXiv preprint arXiv:2305.03112},
  year    = {2025}
}