
Can Large Language Models Improve Spectral Graph Neural Networks?

Main: 7 pages · 3 figures · 5 tables · Bibliography: 2 pages
Abstract

Spectral Graph Neural Networks (SGNNs) have attracted significant attention due to their ability to approximate arbitrary filters. They typically rely on supervision from downstream tasks to adaptively learn appropriate filters. However, under label-scarce conditions, SGNNs may learn suboptimal filters, leading to degraded performance. Meanwhile, the remarkable success of Large Language Models (LLMs) has inspired growing interest in exploring their potential within the GNN domain. This naturally raises an important question: Can LLMs help overcome the limitations of SGNNs and enhance their performance? In this paper, we propose a novel approach that leverages LLMs to estimate the homophily of a given graph. The estimated homophily is then used to adaptively guide the design of polynomial spectral filters, thereby improving the expressiveness and adaptability of SGNNs across diverse graph structures. Specifically, we introduce a lightweight pipeline in which the LLM generates homophily-aware priors, which are injected into the filter coefficients to better align with the underlying graph topology. Extensive experiments on benchmark datasets demonstrate that our LLM-driven SGNN framework consistently outperforms existing baselines under both homophilic and heterophilic settings, with minimal computational and monetary overhead.
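To make the idea concrete, the sketch below illustrates one plausible way an LLM-estimated homophily score could be injected into the coefficients of a polynomial spectral filter. It is a minimal illustration only, not the authors' released code: the class name `HomophilyGuidedPolyFilter`, the `estimated_homophily` argument, and the specific prior rule (homophily favors a low-pass, decaying coefficient profile; heterophily favors a high-pass, growing profile) are all assumptions for demonstration, and the paper's actual injection scheme may differ.

```python
# Minimal sketch: homophily-aware initialization of a K-order polynomial
# spectral filter g(L) = sum_k theta_k L^k. The homophily estimate is
# assumed to come from an LLM prompt elsewhere in the pipeline.

import torch
import torch.nn as nn


class HomophilyGuidedPolyFilter(nn.Module):
    """Polynomial spectral filter whose coefficients theta_k are
    initialized from an estimated homophily score (hypothetical rule)."""

    def __init__(self, order: int, estimated_homophily: float):
        super().__init__()
        k = torch.arange(order + 1, dtype=torch.float32)
        h = float(estimated_homophily)
        # Hypothetical prior: blend a decaying (low-pass) profile and a
        # growing (high-pass) profile according to the homophily estimate.
        prior = h * torch.exp(-k) + (1.0 - h) * (k / (order + 1))
        self.theta = nn.Parameter(prior)  # remains learnable downstream

    def forward(self, laplacian: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        out = self.theta[0] * x
        z = x
        for k in range(1, self.theta.numel()):
            z = laplacian @ z              # iteratively compute L^k x
            out = out + self.theta[k] * z
        return out


# Usage: a 3rd-order filter on a toy graph with an estimated homophily of 0.8.
if __name__ == "__main__":
    n, d = 5, 4
    L = torch.eye(n)                       # placeholder normalized Laplacian
    X = torch.randn(n, d)
    filt = HomophilyGuidedPolyFilter(order=3, estimated_homophily=0.8)
    print(filt(L, X).shape)                # torch.Size([5, 4])
```

The design choice here is that the prior only initializes the coefficients; they are still optimized against the downstream task, so the LLM signal acts as a bias toward a suitable filter shape rather than a hard constraint.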

@article{lu2025_2506.14220,
  title={Can Large Language Models Improve Spectral Graph Neural Networks?},
  author={Kangkang Lu and Yanhua Yu and Zhiyong Huang and Tat-Seng Chua},
  journal={arXiv preprint arXiv:2506.14220},
  year={2025}
}