
NOCL: Node-Oriented Conceptualization LLM for Graph Tasks without Message Passing

Main: 10 pages
11 figures
Bibliography: 4 pages
8 tables
Appendix: 13 pages
Abstract

Graphs are essential for modeling complex interactions across domains such as social networks, biology, and recommendation systems. Traditional Graph Neural Networks, particularly Message Passing Neural Networks (MPNNs), rely heavily on supervised learning, limiting their generalization and applicability in label-scarce scenarios. Recent self-supervised approaches still require labeled fine-tuning, limiting their effectiveness in zero-shot settings. Meanwhile, Large Language Models (LLMs) excel at natural language tasks but face significant challenges when applied to graphs, including preserving their reasoning abilities, managing extensive token lengths arising from rich node attributes, and being limited to text-attributed graphs (TAGs) and single-level tasks. To overcome these limitations, we propose the Node-Oriented Conceptualization LLM (NOCL), a novel framework that leverages two core techniques: 1) node description, which converts heterogeneous node attributes into structured natural language, extending LLMs from TAGs to non-TAGs; 2) node concept, which encodes node descriptions into compact semantic embeddings using pretrained language models, reducing token lengths by up to 93.9% compared to using node descriptions directly. Additionally, NOCL employs graph representation descriptors to unify graph tasks at various levels into a shared, language-based query format, paving a new direction for Graph Foundation Models. Experimental results validate NOCL's competitive supervised performance relative to traditional MPNNs and hybrid LLM-MPNN methods and demonstrate its superior generalization in zero-shot settings.
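The following is a minimal sketch (not the authors' code) of the two core ideas described above: converting heterogeneous node attributes into a natural-language node description, and compressing each description into a compact "node concept" embedding with a pretrained language model so the LLM consumes one vector per node rather than the full attribute text. The description template and the choice of encoder (all-MiniLM-L6-v2 via sentence-transformers) are illustrative assumptions, not the paper's exact components.

```python
# Illustrative sketch of "node description" and "node concept" under assumed
# templates and encoder; NOCL's actual pipeline may differ.
from sentence_transformers import SentenceTransformer


def node_description(attrs: dict) -> str:
    """Turn a node's (possibly non-textual) attribute dict into a sentence."""
    parts = [f"{key} is {value}" for key, value in attrs.items()]
    return "This node's " + "; ".join(parts) + "."


# Example nodes from a non-text-attributed graph (hypothetical attributes).
nodes = [
    {"degree": 12, "community": 3, "feature_norm": 0.87},
    {"degree": 4, "community": 1, "feature_norm": 0.12},
]

descriptions = [node_description(n) for n in nodes]

# "Node concepts": fixed-size semantic embeddings of the descriptions.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
concepts = encoder.encode(descriptions)  # shape: (num_nodes, 384)

# Each node is now represented by one compact embedding (to be projected into
# the LLM's input space) instead of many description tokens, which is the
# source of the token-length reduction reported in the abstract.
print(concepts.shape)
```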

@article{li2025_2506.10014,
  title={NOCL: Node-Oriented Conceptualization LLM for Graph Tasks without Message Passing},
  author={Wei Li and Mengcheng Lan and Jiaxing Xu and Yiping Ke},
  journal={arXiv preprint arXiv:2506.10014},
  year={2025}
}