Rethinking Random Masking in Self Distillation on ViT

Main: 3 pages, 2 figures, 2 tables; Bibliography: 1 page
Abstract

Vision Transformers (ViTs) have demonstrated remarkable performance across a wide range of vision tasks. In particular, self-distillation frameworks such as DINO have contributed significantly to these advances. Within such frameworks, random masking is often utilized to improve training efficiency and introduce regularization. However, recent studies have raised concerns that indiscriminate random masking may inadvertently eliminate critical semantic information, motivating the development of more informed masking strategies. In this study, we explore the role of random masking in the self-distillation setting, focusing on the DINO framework. Specifically, we apply random masking exclusively to the student's global view, while preserving the student's local views and the teacher's global view in their original, unmasked forms. This design leverages DINO's multi-view augmentation scheme to retain clean supervision while inducing robustness through masked inputs. We evaluate our approach using DINO-Tiny on the mini-ImageNet dataset and show that random masking under this asymmetric setup yields more robust and fine-grained attention maps, ultimately enhancing downstream performance.
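
To make the asymmetric setup concrete, below is a minimal PyTorch sketch of how random patch masking could be applied only to the student's global view, leaving the teacher's global view and the student's local views untouched. This is an illustration under assumptions, not the authors' released code: the function `random_mask_patches`, the `mask_token` parameter, and the `mask_ratio=0.3` value are hypothetical names and settings chosen for clarity.

```python
import torch
import torch.nn as nn


def random_mask_patches(patch_tokens: torch.Tensor,
                        mask_token: torch.Tensor,
                        mask_ratio: float = 0.3) -> torch.Tensor:
    """Replace a random subset of patch tokens with a learnable mask token.

    patch_tokens: (B, N, D) patch embeddings (CLS token excluded).
    mask_token:   (1, 1, D) learnable replacement token.
    mask_ratio:   fraction of patches masked per image.
    """
    B, N, _ = patch_tokens.shape
    num_mask = int(N * mask_ratio)
    # Per-image random permutation of patch indices; mask the first num_mask.
    noise = torch.rand(B, N, device=patch_tokens.device)
    mask_idx = noise.argsort(dim=1)[:, :num_mask]                  # (B, num_mask)
    mask = torch.zeros(B, N, dtype=torch.bool, device=patch_tokens.device)
    mask.scatter_(1, mask_idx, True)                               # True = masked position
    return torch.where(mask.unsqueeze(-1),
                       mask_token.to(patch_tokens.dtype),
                       patch_tokens)


# Asymmetric application: only the student's GLOBAL view is masked.
# Teacher global views and student local views stay clean (unmasked).
B, N, D = 8, 196, 192                        # e.g. ViT-Tiny: 14x14 patches, dim 192
mask_token = nn.Parameter(torch.zeros(1, 1, D))

student_global = torch.randn(B, N, D)        # stand-in for embedded global crops
student_global_masked = random_mask_patches(student_global, mask_token, mask_ratio=0.3)

# student(student_global_masked), student(local_views), and teacher(global_views)
# would then feed the usual DINO cross-entropy between student and teacher outputs.
```

In this sketch the clean teacher targets provide the stable supervision signal, while the masked student global view forces the student to infer the missing patch content, which is the source of the robustness effect described above.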

@article{seong2025_2506.10582,
  title={Rethinking Random Masking in Self Distillation on ViT},
  author={Jihyeon Seong and Hyunkyung Han},
  journal={arXiv preprint arXiv:2506.10582},
  year={2025}
}