Self-Contrastive Learning: An Efficient Supervised Contrastive Framework with Single-view and Sub-network

This paper proposes an efficient supervised contrastive learning framework, called Self-Contrastive (SelfCon) learning, that self-contrasts among multiple outputs from different levels of a multi-exit network. With a single view, SelfCon learning requires no additional augmented samples, resolving the concerns of a multi-viewed batch (e.g., high computational cost and generalization error). Unlike previous works based on the mutual information (MI) between multiple views in unsupervised learning, we prove an MI bound for the SelfCon loss in a supervised, single-viewed framework. We also empirically show that the success of SelfCon learning is related to the regularization effect of the single view and sub-network. On ImageNet, SelfCon with a single-viewed batch improves accuracy by +0.3% while using 67% of the memory and 45% of the training time of Supervised Contrastive (SupCon) learning, and a simple ensemble of the multi-exit outputs boosts performance by up to +1.4%.
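To make the mechanism concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: a backbone with one auxiliary sub-network exit produces two embeddings per image from a single view, and a supervised contrastive loss treats same-label embeddings across the two exits as positives. All module names, architecture details, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of SelfCon-style training, assuming a toy two-stage backbone with
# one intermediate exit. Names (MultiExitEncoder, selfcon_loss) are
# hypothetical, chosen for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitEncoder(nn.Module):
    """Toy backbone with one sub-network exit at an intermediate stage."""
    def __init__(self, dim=128):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_sub = nn.Linear(32, dim)    # intermediate-exit projection
        self.head_main = nn.Linear(64, dim)   # final-exit projection

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        z_sub = self.head_sub(self.pool(h1).flatten(1))
        z_main = self.head_main(self.pool(h2).flatten(1))
        return F.normalize(z_sub, dim=1), F.normalize(z_main, dim=1)

def selfcon_loss(z_sub, z_main, labels, temperature=0.1):
    """Supervised contrastive loss over the two exits of a single view."""
    z = torch.cat([z_sub, z_main], dim=0)       # (2B, d) stacked embeddings
    y = torch.cat([labels, labels], dim=0)      # (2B,) repeated labels
    sim = z @ z.t() / temperature               # pairwise similarities
    self_mask = torch.eye(len(y), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)      # exclude self-similarity
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-likelihood over positives, then over anchors; every anchor
    # has at least one positive (its counterpart from the other exit)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1)
    return loss.mean()

# Usage sketch: one forward pass of a single-viewed batch.
model = MultiExitEncoder()
x = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 4, (8,))
loss = selfcon_loss(*model(x), labels)
loss.backward()
```

Note that the batch above contains only one view per image; the second embedding for each anchor comes from the sub-network exit rather than from a second augmentation, which is what removes the memory and time cost of a multi-viewed batch.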