A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning

Abstract

Recently, increasing attention has been devoted to the graph few-shot learning problem, where the target novel classes contain only a few labeled nodes. Among many existing endeavors, episodic meta-learning has become the most prevalent paradigm, and its episodic emulation of the test environment is believed to equip graph neural network (GNN) models with adaptability to novel node classes. However, in the image domain, recent results have shown that feature reuse is more likely the key to meta-learning's few-shot extrapolation. Motivated by this observation, in this work we propose a simple framework based on transductive fine-tuning as a new paradigm for graph few-shot learning. In the proposed paradigm, a graph encoder backbone is pretrained on the base classes, and a simple linear classifier is fine-tuned on the few labeled samples and tasked with classifying the unlabeled ones. For pretraining, we propose a supervised contrastive learning framework with data augmentation strategies specific to few-shot node classification to improve the extrapolation of a GNN encoder. Finally, extensive experiments conducted on three benchmark datasets demonstrate the advantage of our framework over state-of-the-art methods.
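
As a rough illustration of the two-stage paradigm the abstract describes, here is a minimal PyTorch sketch: supervised contrastive pretraining (following Khosla et al., 2020) followed by transductive fine-tuning of a linear head. The random embeddings, dataset sizes, and the sup_con_loss/clf names are illustrative placeholders, not the authors' implementation, which also uses graph-specific augmentations and a GNN encoder not sketched here.

import torch
import torch.nn.functional as F

def sup_con_loss(z, labels, tau=0.5):
    """Supervised contrastive loss: same-class embeddings are pulled
    together, different-class ones pushed apart (Khosla et al., 2020)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    # the softmax denominator excludes the anchor itself
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# Toy demonstration with random stand-ins for pretrained GNN outputs.
torch.manual_seed(0)
emb = torch.randn(200, 64)                   # frozen encoder embeddings (placeholder)
base_labels = torch.randint(0, 10, (200,))   # base-class labels used for pretraining
print("SupCon loss:", sup_con_loss(emb, base_labels).item())

# Few-shot stage: fit only a linear head on the labeled support nodes,
# then classify the unlabeled query nodes (transductive fine-tuning).
support_idx, query_idx = torch.arange(10), torch.arange(10, 30)
support_y = torch.randint(0, 5, (10,))       # 5-way support labels (placeholder)
clf = torch.nn.Linear(64, 5)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    F.cross_entropy(clf(emb[support_idx]), support_y).backward()
    opt.step()
pred = clf(emb[query_idx]).argmax(dim=1)     # predictions for the query nodes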
