Structural and Semantic Contrastive Learning for Self-supervised Node Representation Learning

AAAI Conference on Artificial Intelligence (AAAI), 2022
Main: 7 Pages
Appendix: 3 Pages
Bibliography: 2 Pages
6 Figures
10 Tables
Abstract

Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable, transferable, and robust node representations in a self-supervised fashion. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods. However, existing GCL efforts have severe limitations in terms of encoding architecture, augmentation, and contrastive objective, making them commonly inefficient and ineffective across different datasets. In this work, we go beyond the existing unsupervised GCL counterparts and address their limitations by proposing a simple yet effective framework, S^3-CL. Specifically, by virtue of the proposed structural and semantic contrastive learning, even a simple neural network is able to learn expressive node representations that preserve valuable structural and semantic patterns. Our experiments demonstrate that the node representations learned by S^3-CL achieve superior performance on different downstream tasks compared to state-of-the-art GCL methods.
