Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum

Self-Supervised Learning (SSL) has become a powerful approach for extracting rich representations from unlabeled data. Yet, SSL research has mostly focused on clean, curated, high-quality datasets. As a result, applying SSL to noisy data remains a challenge, despite being crucial to applications such as astrophysics, medical imaging, geophysics, or finance. In this work, we present a fully self-supervised framework that enables noise-robust representation learning without requiring a denoiser at inference or downstream fine-tuning. Our method first trains an SSL denoiser on noisy data, then uses it to construct a denoised-to-noisy data curriculum (i.e., training first on denoised, then noisy samples) for pretraining an SSL backbone (e.g., DINOv2), combined with a teacher-guided regularization that anchors noisy embeddings to their denoised counterparts. This process encourages the model to internalize noise robustness. Notably, the denoiser can be discarded after pretraining, simplifying deployment. On ImageNet-1k with ViT-B under extreme Gaussian noise (SNR = 0.72 dB), our method improves linear probing accuracy by 4.8% over DINOv2, demonstrating that denoiser-free robustness can emerge from noise-aware pretraining. The code is available at this https URL.
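To make the two ingredients concrete, below is a minimal PyTorch sketch of how a denoised-to-noisy curriculum and a teacher-guided anchoring term could be combined in a training step. The linear schedule, cosine anchor, and names such as `curriculum_ratio`, `mix_batch`, and `anchor_weight` are illustrative assumptions, not the paper's exact formulation or released code.

```python
# Hypothetical sketch: denoised-to-noisy curriculum plus a teacher-guided
# anchoring term layered on top of a generic SSL objective.
import torch
import torch.nn.functional as F


def curriculum_ratio(step: int, total_steps: int, warmup_frac: float = 0.5) -> float:
    """Fraction of the batch drawn from denoised images.

    Starts at 1.0 (all denoised) and decays linearly to 0.0 (all noisy)
    over the first `warmup_frac` of training, then stays at 0.0.
    """
    progress = min(step / (warmup_frac * total_steps), 1.0)
    return 1.0 - progress


def mix_batch(noisy: torch.Tensor, denoised: torch.Tensor, ratio: float) -> torch.Tensor:
    """Replace the first `ratio` fraction of samples with their denoised versions."""
    batch = noisy.clone()
    k = int(ratio * noisy.shape[0])
    batch[:k] = denoised[:k]
    return batch


def anchor_loss(student_noisy: torch.Tensor, teacher_denoised: torch.Tensor) -> torch.Tensor:
    """Teacher-guided regularization: pull student embeddings of noisy inputs
    toward the (stop-gradient) teacher embeddings of their denoised counterparts."""
    return 1.0 - F.cosine_similarity(student_noisy, teacher_denoised.detach(), dim=-1).mean()


# Usage inside a training step (ssl_loss, student, teacher, denoiser are
# placeholders for the SSL objective, backbones, and pretrained SSL denoiser):
#   images = mix_batch(noisy_images, denoiser(noisy_images),
#                      curriculum_ratio(step, total_steps))
#   loss = ssl_loss(student, teacher, images) + anchor_weight * anchor_loss(
#       student(noisy_images), teacher(denoiser(noisy_images)))
```

Note that because the anchoring is applied only during pretraining, the denoiser is not needed once the backbone is trained, matching the "ditch the denoiser" deployment setting described above.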
@article{lu2025_2505.12191,
  title   = {Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum},
  author  = {Wenquan Lu and Jiaqi Zhang and Hugues Van Assel and Randall Balestriero},
  journal = {arXiv preprint arXiv:2505.12191},
  year    = {2025}
}