On Recoverability of Graph Neural Network Representations
Despite their growing popularity, graph neural networks (GNNs) still face multiple unsolved problems, including limited embedding expressiveness, difficulty propagating information to distant nodes, and training on large-scale graphs. Understanding the roots of these problems and providing solutions requires developing analytic tools and techniques. In this work, we propose the notion of recoverability, which measures how much information a random variable contains for recovering another one from it. We provide a method for efficient empirical estimation of recoverability, demonstrate its tight relationship to information aggregation in GNNs, and show how this new concept can be used in unsupervised graph representation learning. Through extensive experiments on various datasets and different GNN architectures, we demonstrate that estimated recoverability correlates with aggregation-method expressivity and graph-sparsification quality, that GNN representations can be learned with our unsupervised approach, and that recoverability regularization can mitigate the accuracy drop caused by increasing GNN depth. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/Recoverability
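To make the notion of recoverability concrete, the sketch below uses a toy proxy: the normalized error of the best linear reconstruction of one variable from another. This linear estimator, the function name `recovery_error`, and the synthetic data are illustrative assumptions, not the paper's actual definition or estimation method; they only show the kind of quantity being measured (low error when one variable determines another, high error when they are independent).

```python
import numpy as np

def recovery_error(X, Y):
    """Toy recoverability proxy (an assumption, not the paper's method):
    normalized residual of the best linear reconstruction of Y from X."""
    # Least-squares fit Y ~ X @ W, then measure the leftover error.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    residual = Y - X @ W
    return np.linalg.norm(residual) / np.linalg.norm(Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
Y_recoverable = X @ rng.normal(size=(8, 3))  # deterministic function of X
Y_noise = rng.normal(size=(500, 3))          # independent of X

print(recovery_error(X, Y_recoverable))  # near 0: Y fully recoverable from X
print(recovery_error(X, Y_noise))        # near 1: X carries little about Y
```

A lower score means the first variable retains more of the information needed to recover the second, which is the intuition the paper formalizes and connects to information aggregation in GNN layers.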