State-of-the-art vision pretraining methods rely on image-level self-distillation from object-centric datasets such as ImageNet, implicitly assuming each image contains a single object. This assumption does not always hold: many ImageNet images already contain multiple objects. It also limits scalability to scene-centric datasets that better mirror real-world complexity. We address these challenges by introducing Object-level Self-DIStillation (ODIS), a pretraining approach that shifts the self-distillation granularity from whole images to individual objects. Using object-aware cropping and masked attention, ODIS isolates object-specific regions, guiding the transformer toward semantically meaningful content and turning a noisy scene-level task into simpler object-level sub-tasks. We show that this approach improves visual representations at both the image and patch levels. Applying masks at inference time, our method achieves strong k-NN accuracy on ImageNet-1k with ViT-Large.
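To make the idea concrete, below is a minimal sketch (not the authors' code) of what object-level self-distillation can look like in PyTorch: each object mask pools the ViT patch tokens into one object embedding, and a DINO-style cross-entropy is computed per object rather than per image. For simplicity the sketch uses masked average pooling in place of masked attention inside the transformer, and names such as object_masks, tau_s, and tau_t are illustrative assumptions.

import torch
import torch.nn.functional as F

def masked_pool(patch_tokens, object_masks):
    """Average patch tokens inside each object mask.

    patch_tokens: (B, N, D) ViT patch embeddings.
    object_masks: (B, K, N) binary masks, one per object, over the N patches.
    Returns: (B, K, D), one embedding per object.
    """
    # Normalize each mask so it averages (clamp guards against empty masks).
    weights = object_masks / object_masks.sum(dim=-1, keepdim=True).clamp(min=1)
    return torch.einsum("bkn,bnd->bkd", weights, patch_tokens)

def object_distillation_loss(student_tokens, teacher_tokens, object_masks,
                             head, tau_s=0.1, tau_t=0.04):
    """DINO-style self-distillation applied per object instead of per image."""
    s = head(masked_pool(student_tokens, object_masks))       # (B, K, P) logits
    with torch.no_grad():
        t = head(masked_pool(teacher_tokens, object_masks))   # teacher targets
    # The teacher provides soft targets (sharper temperature tau_t);
    # the student matches them under temperature tau_s.
    return torch.sum(-F.softmax(t / tau_t, dim=-1)
                     * F.log_softmax(s / tau_s, dim=-1), dim=-1).mean()

# Usage with random tensors standing in for ViT outputs and object masks:
B, N, D, K, P = 2, 196, 768, 3, 1024
head = torch.nn.Linear(D, P)
student = torch.rand(B, N, D)
teacher = torch.rand(B, N, D)
masks = (torch.rand(B, K, N) > 0.7).float()
print(object_distillation_loss(student, teacher, masks, head))

The key design point this illustrates is the shift in granularity: the same distillation objective is evaluated once per object mask, so a multi-object scene decomposes into several cleaner sub-tasks instead of one noisy image-level target.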
@article{hızlı2025_2506.05409,
  title   = {Object-level Self-Distillation for Vision Pretraining},
  author  = {Çağlar Hızlı and Çağatay Yıldız and Pekka Marttinen},
  journal = {arXiv preprint arXiv:2506.05409},
  year    = {2025}
}