
An Augmentation-Aware Theory for Self-Supervised Contrastive Learning

Main: 8 pages
11 figures
Bibliography: 3 pages
Appendix: 8 pages
Abstract

Self-supervised contrastive learning has emerged as a powerful tool in machine learning and computer vision for learning meaningful representations from unlabeled data. Its empirical success has motivated many theoretical studies aimed at revealing the underlying learning mechanisms. However, in existing theoretical work, the role of data augmentation remains underexplored, particularly the effects of specific augmentation types. To fill this gap, we propose, for the first time, an augmentation-aware error bound for self-supervised contrastive learning, showing that the supervised risk is bounded not only by the unsupervised risk, but also explicitly by a trade-off induced by data augmentation. Then, under a novel semantic label assumption, we discuss how particular augmentation methods affect the error bound. Lastly, we conduct both pixel- and representation-level experiments to verify the proposed theoretical results.
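
To make the setting concrete, the sketch below shows a generic augmentation-based contrastive objective (a SimCLR-style InfoNCE loss), i.e., the kind of unsupervised risk the abstract refers to. It is only an illustrative assumption about the standard setup, not the paper's proposed bound or analysis; the function name info_nce_loss, the temperature value, and the toy inputs are placeholders.

# Minimal sketch of the unsupervised contrastive (InfoNCE) risk over two
# augmented views of the same batch. Illustrative only; not the paper's method.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) representations of two augmentations of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d) stacked views
    sim = z @ z.t() / temperature                  # (2N, 2N) scaled cosine similarities
    n = z1.size(0)
    # Exclude each sample's similarity with itself from the softmax.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins for an encoder's outputs on two augmented views.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())

In this view, the choice of augmentation determines which pairs are treated as positives, which is the quantity the augmentation-aware bound makes explicit.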

@article{cui2025_2505.22196,
  title={An Augmentation-Aware Theory for Self-Supervised Contrastive Learning},
  author={Jingyi Cui and Hongwei Wen and Yisen Wang},
  journal={arXiv preprint arXiv:2505.22196},
  year={2025}
}