Information-Preserving Contrastive Learning for Self-Supervised
Representations
Contrastive learning is highly effective at learning useful representations without supervision, but it has a key limitation: it may latch onto a shortcut that is irrelevant to the downstream task and discard relevant information. Past work has addressed this limitation with custom data augmentations that eliminate the shortcut. This solution, however, does not work for data modalities that humans cannot interpret, e.g., radio signals; for such modalities, it is hard for a human to guess which shortcuts exist in the signal or how to eliminate them. Even for interpretable data, eliminating the shortcut is sometimes undesirable: a feature that acts as a shortcut for one downstream task may be essential for another. In that case, it is desirable to learn a representation that captures both the shortcut information and the information relevant to the other task. This paper presents information-preserving contrastive learning (IPCL), a new framework for unsupervised representation learning that preserves relevant information even in the presence of shortcuts. We empirically show that the representations learned by IPCL outperform those of standard contrastive learning across multiple modalities and diverse downstream tasks.
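To make the shortcut problem concrete, the following is a minimal sketch of a standard contrastive (InfoNCE-style) objective, not the IPCL method itself. The function name, toy data, and temperature value are illustrative assumptions; the point is that the loss only rewards matching an anchor to its augmented view, so any feature that distinguishes pairs, including a task-irrelevant shortcut, can minimize it.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Standard InfoNCE contrastive loss: each anchor should match its
    own positive view against all other positives in the batch.
    (Illustrative sketch; not the IPCL objective from the paper.)"""
    # L2-normalize embeddings so similarity is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the correct (anchor, positive) pairs.
    return -np.mean(np.diag(log_probs))

# Toy batch: 4 pairs of 8-d embeddings from two augmented "views".
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
loss_matched = info_nce_loss(x, x + 0.01 * rng.normal(size=(4, 8)))
loss_random = info_nce_loss(x, rng.normal(size=(4, 8)))
# Well-aligned views yield a lower loss than random pairings.
assert loss_matched < loss_random
```

Because nothing in this objective ties the representation to any particular downstream task, the encoder is free to solve it using whichever discriminative feature is easiest, which is exactly the failure mode IPCL is designed to avoid.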