
'Hello, World!': Making GNNs Talk with LLMs

Main: 3 pages · 20 figures · 3 tables · Bibliography: 2 pages · Appendix: 11 pages
Abstract

While graph neural networks (GNNs) have shown remarkable performance across diverse graph-related tasks, their high-dimensional hidden representations render them black boxes. In this work, we propose Graph Lingual Network (GLN), a GNN built on large language models (LLMs), with hidden representations in the form of human-readable text. Through careful prompt design, GLN incorporates not only the message passing module of GNNs but also advanced GNN techniques, including graph attention and initial residual connection. The comprehensibility of GLN's hidden representations enables an intuitive analysis of how node representations change (1) across layers and (2) under advanced GNN techniques, shedding light on the inner workings of GNNs. Furthermore, we demonstrate that GLN achieves strong zero-shot performance on node classification and link prediction, outperforming existing LLM-based baseline methods.

@article{kim2025_2505.20742,
  title={'Hello, World!': Making GNNs Talk with LLMs},
  author={Sunwoo Kim and Soo Yong Lee and Jaemin Yoo and Kijung Shin},
  journal={arXiv preprint arXiv:2505.20742},
  year={2025}
}