
Cayley Graph Propagation

Main: 9 pages · Appendix: 7 pages · Bibliography: 4 pages · 8 figures · 12 tables
Abstract

Despite the many success stories of graph neural networks (GNNs) in modelling graph-structured data, they are notoriously vulnerable to over-squashing whenever a task requires mixing information between distant pairs of nodes. To address this problem, prior work suggests rewiring the graph structure to improve information flow. Alternatively, a significant body of research has been dedicated to discovering and precomputing bottleneck-free graph structures to ameliorate over-squashing. One well-regarded family of bottleneck-free graphs within the mathematical community is the expander graphs, with prior work -- Expander Graph Propagation (EGP) -- proposing a well-known expander graph family, the Cayley graphs of the special linear group $\mathrm{SL}(2,\mathbb{Z}_n)$, as a computational template for GNNs. However, in EGP the computational graphs are truncated to align with a given input graph. In this work, we show that this truncation is detrimental to the coveted expansion properties. Instead, we propose Cayley Graph Propagation (CGP), a method that propagates information over the complete Cayley graph structure, thereby ensuring it is bottleneck-free and better alleviates over-squashing. Our empirical evidence across several real-world datasets shows not only that CGP recovers significant improvements over EGP, but also that it matches or outperforms computationally complex graph-rewiring techniques.
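For intuition, a minimal sketch of constructing the $\mathrm{SL}(2,\mathbb{Z}_n)$ Cayley graph used as a computational template is given below. This is not the authors' implementation; it assumes the standard generator set $\{A_1, A_2, A_1^{-1}, A_2^{-1}\}$ with $A_1 = \begin{pmatrix}1&1\\0&1\end{pmatrix}$ and $A_2 = \begin{pmatrix}1&0\\1&1\end{pmatrix}$, and simply enumerates the group by breadth-first search from the identity.

# Sketch: Cayley graph of SL(2, Z_n) as a node index and directed edge list.
# Assumes the standard generators A1 = [[1,1],[0,1]], A2 = [[1,0],[1,1]] and
# their inverses mod n; not taken from the paper's code.
from collections import deque

import numpy as np


def cayley_graph_sl2_zn(n: int):
    """Return (node index, edge list) of the Cayley graph of SL(2, Z_n)."""
    a1 = np.array([[1, 1], [0, 1]])
    a2 = np.array([[1, 0], [1, 1]])
    # Inverses of the generators, reduced modulo n.
    a1_inv = np.array([[1, -1], [0, 1]]) % n
    a2_inv = np.array([[1, 0], [-1, 1]]) % n
    generators = [a1, a2, a1_inv, a2_inv]

    identity = ((1, 0), (0, 1))
    index = {identity: 0}  # matrix (as nested tuple) -> node id
    edges = set()
    queue = deque([identity])
    while queue:
        u = queue.popleft()
        u_mat = np.array(u)
        for g in generators:
            # Multiply by a generator and reduce entries mod n.
            v_mat = (u_mat @ g) % n
            v = tuple(map(tuple, v_mat))
            if v not in index:
                index[v] = len(index)
                queue.append(v)
            edges.add((index[u], index[v]))
    return index, sorted(edges)


nodes, edges = cayley_graph_sl2_zn(5)
# |SL(2, Z_5)| = 120 nodes; each node has out-degree 4, so 480 directed edges.
print(len(nodes), len(edges))

Because the generator set is closed under inverses, the resulting graph is 4-regular and symmetric; CGP, as described in the abstract, propagates messages over this complete structure rather than truncating it to the size of the input graph.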

@article{wilson2025_2410.03424,
  title={Cayley Graph Propagation},
  author={JJ Wilson and Maya Bechler-Speicher and Petar Veličković},
  journal={arXiv preprint arXiv:2410.03424},
  year={2025}
}
