Exploiting Transductive Property of Graph Convolutional Neural Networks with Less Labeling Effort

Machine learning approaches on graph data have recently become very popular. Significant results have been obtained by feeding the model the implicit or explicit logical connections between data samples in addition to the samples themselves. In this context, the Graph Convolutional Network (GCN) model, which applies convolution filters to graph data, has made significant experimental contributions. The model follows a transductive, semi-supervised learning approach: because of its transductive property, all data samples, only some of which are labeled, are given as input to the model. Since labeling is costly, reducing the labeling effort matters. This study addresses the following research question: what is the minimum number of labeled samples at which the model reaches its optimal accuracy? In addition, experiments examine how the model's accuracy depends on the sampling approach used to choose which samples to label under a fixed labeling budget. According to the experiments, the model's accuracy can be increased by selecting the labeled samples with a local centrality metric.
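Below is a minimal sketch, not the authors' code, of the labeling strategy the abstract describes: under a fixed labeling budget, the nodes with the highest local (degree) centrality are chosen as the labeled training set, while the whole graph is still fed to a transductive GCN. The example graph, the budget value, and the helper names are illustrative assumptions.

```python
import numpy as np
import networkx as nx


def select_labeled_nodes(graph: nx.Graph, budget: int) -> list:
    """Pick the `budget` nodes with the highest degree (local) centrality."""
    centrality = nx.degree_centrality(graph)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    return ranked[:budget]


def normalized_adjacency(graph: nx.Graph) -> np.ndarray:
    """Symmetrically normalized adjacency with self-loops, as used in GCNs:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = nx.to_numpy_array(graph) + np.eye(graph.number_of_nodes())
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt


if __name__ == "__main__":
    g = nx.karate_club_graph()              # stand-in graph for illustration
    labeled = select_labeled_nodes(g, budget=4)
    a_hat = normalized_adjacency(g)
    # In the transductive setting, the full graph (all node features together
    # with A_hat) is given to the GCN; the supervised loss is computed only
    # on the nodes in `labeled`.
    print("Labeled nodes under a budget of 4:", labeled)
    print("Normalized adjacency shape:", a_hat.shape)
```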