What Can We Learn From MIMO Graph Convolutions?

Abstract

Most graph neural networks (GNNs) utilize approximations of the general graph convolution derived in the graph Fourier domain. While GNNs are typically applied in the multi-input multi-output (MIMO) case, the approximations are performed in the single-input single-output (SISO) case. In this work, we first derive the MIMO graph convolution through the convolution theorem and approximate it directly in the MIMO case. We find the key MIMO-specific property of the graph convolution to be operating on multiple computational graphs, or equivalently, applying distinct feature transformations for each pair of nodes. As a localized approximation, we introduce localized MIMO graph convolutions (LMGCs), which generalize many linear message-passing neural networks. For almost every choice of edge weights, we prove that LMGCs with a single computational graph are injective on multisets, and the resulting representations are linearly independent when more than one computational graph is used. Our experimental results confirm that an LMGC can combine the benefits of various methods.
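As a rough illustration of the idea described above (a hypothetical sketch, not the paper's implementation), a linear layer "operating on multiple computational graphs" can be read as summing several propagation terms, where each computational graph `A_k` is paired with its own feature transformation `W_k`:

```python
import numpy as np

def lmgc_layer(adjs, H, Ws):
    """One linear LMGC-style layer (illustrative sketch):
    each computational graph A_k gets its own feature
    transformation W_k, and the per-graph outputs are summed,
    i.e. H' = sum_k A_k @ H @ W_k."""
    return sum(A @ H @ W for A, W in zip(adjs, Ws))

# Toy example: 4 nodes, 3 features, 2 computational graphs.
rng = np.random.default_rng(0)
A1 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)  # a path graph
A2 = np.eye(4)  # self-loops as a second computational graph
H = rng.standard_normal((4, 3))
Ws = [rng.standard_normal((3, 3)) for _ in range(2)]

H_next = lmgc_layer([A1, A2], H, Ws)
print(H_next.shape)  # (4, 3)
```

With a single computational graph and a single weight matrix, the sketch reduces to a standard linear message-passing update, which matches the abstract's claim that LMGCs generalize many linear message-passing neural networks.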

@article{roth2025_2505.11346,
  title={What Can We Learn From MIMO Graph Convolutions?},
  author={Andreas Roth and Thomas Liebig},
  journal={arXiv preprint arXiv:2505.11346},
  year={2025}
}