
Graph Transformers Dream of Electric Flow

International Conference on Learning Representations (ICLR), 2024
Main: 11 pages · 3 figures · 2 tables · Bibliography: 2 pages · Appendix: 9 pages
Abstract

We show theoretically and empirically that the linear Transformer, when applied to graph data, can implement algorithms that solve canonical problems such as electric flow and eigenvector decomposition. The Transformer has access to information about the input graph only via the graph's incidence matrix. We present explicit weight configurations for implementing each algorithm, and we bound the constructed Transformers' errors by the errors of the underlying algorithms. Our theoretical findings are corroborated by experiments on synthetic data. Additionally, on a real-world molecular regression task, we observe that the linear Transformer is capable of learning a more effective positional encoding than the default one based on Laplacian eigenvectors. Our work is an initial step towards elucidating the inner workings of the Transformer for graph data. Code is available at this https URL
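The electric-flow problem mentioned in the abstract can be sketched directly from the graph's incidence matrix: node potentials solve a Laplacian system, and the flow is read off edge by edge. The following is a minimal NumPy sketch of that classical computation (not the paper's Transformer construction); the function name and sign conventions are illustrative.

```python
import numpy as np

def electric_flow(B, b):
    """Energy-minimizing electric flow on a graph.

    B : (num_edges, num_nodes) signed incidence matrix; each row has -1
        at the edge's tail and +1 at its head (illustrative convention).
    b : (num_nodes,) external current injection, entries summing to zero.
    Returns the current along each edge's orientation.
    """
    L = B.T @ B                    # graph Laplacian (unweighted edges)
    phi = np.linalg.pinv(L) @ b    # node potentials: solve L @ phi = b
    return -(B @ phi)              # current flows from high to low potential

# Path graph 0-1-2: push one unit of current from node 0 to node 2.
B = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
b = np.array([1.0, 0.0, -1.0])
print(electric_flow(B, b))         # one unit along each edge: [1. 1.]
```

The pseudoinverse handles the Laplacian's rank deficiency (the all-ones null space); weighted edges would scale each row of `B` by the square root of its conductance.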
