
Graph autoencoder with constant-dimensional latent space

International Conference on Neural Information Processing (ICONIP), 2022
Main: 10 pages
1 figure
Bibliography: 2 pages
6 tables
Abstract

Invertible transformation of large graphs into constant-dimensional vectors (embeddings) remains a challenge. In this paper we address it with two recursive neural networks: an encoder and a decoder. The encoder transforms embeddings of subgraphs into embeddings of larger subgraphs, and eventually into the embedding of the input graph; the decoder does the opposite. The dimension of the embeddings is constant regardless of the size of the (sub)graphs. Simulation experiments presented in this paper confirm that our proposed graph autoencoder can handle graphs with thousands of vertices.
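The key property claimed in the abstract is that the embedding dimension stays fixed no matter how many vertices are folded in. A minimal sketch of that idea, with hypothetical untrained random weights (the paper's actual architecture, training procedure, and merge order are not specified here), might look like:

```python
import numpy as np

# Illustrative sketch only: an "encoder" merges two d-dimensional subgraph
# embeddings into one d-dimensional embedding, and a "decoder" maps one
# embedding back to two. The weights are random and untrained; this shows
# the constant dimensionality, not the paper's trained model.

D = 8  # constant embedding dimension (arbitrary choice for the sketch)

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((D, 2 * D)) / np.sqrt(2 * D)  # maps 2D -> D
W_dec = rng.standard_normal((2 * D, D)) / np.sqrt(D)      # maps D -> 2D

def encode(h_left, h_right):
    """Merge two subgraph embeddings into one of the same dimension D."""
    return np.tanh(W_enc @ np.concatenate([h_left, h_right]))

def decode(h):
    """Split an embedding back into two child embeddings of dimension D."""
    out = W_dec @ h
    return out[:D], out[D:]

def encode_graph(vertex_embeddings):
    """Recursively fold per-vertex embeddings into a single D-vector."""
    hs = list(vertex_embeddings)
    while len(hs) > 1:
        hs = [encode(hs[i], hs[i + 1]) if i + 1 < len(hs) else hs[i]
              for i in range(0, len(hs), 2)]
    return hs[0]

# Even with a thousand vertices, the graph embedding is still D-dimensional.
g = encode_graph([rng.standard_normal(D) for _ in range(1000)])
print(g.shape)  # -> (8,)
```

Training such a pair end-to-end (so that `decode` actually inverts `encode`) is the hard part the paper addresses; the sketch only demonstrates why the latent dimension does not grow with graph size.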
