GraphEdit: Large Language Models for Graph Structure Learning

23 February 2024
Zirui Guo
Lianghao Xia
Yanhua Yu
Yuling Wang
Zixuan Yang
Zhiyong Huang
Chao Huang
Abstract

Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: this https URL.
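The abstract describes using an instruction-tuned LLM to judge candidate node relationships and thereby denoise or refine the graph structure. The sketch below is not the authors' implementation; it only illustrates that general idea under stated assumptions. The prompt wording, the `llm_judge` callable (standing in for an instruction-tuned LLM), and the `refine_graph` helper are all illustrative names introduced here, not APIs from the paper's released code.

```python
# Minimal sketch (not the authors' code) of the idea in the abstract: an
# instruction-tuned LLM judges candidate node pairs, and its yes/no decisions
# decide which edges to keep. Prompt wording and `llm_judge` are assumptions.

from typing import Callable, Dict, List, Set, Tuple

Edge = Tuple[int, int]


def build_edge_prompt(text_u: str, text_v: str) -> str:
    """Format a node pair as an instruction-style query for the LLM."""
    return (
        "Below are descriptions of two nodes from a graph.\n"
        f"Node A: {text_u}\n"
        f"Node B: {text_v}\n"
        "Should these two nodes be connected? Answer 'yes' or 'no'."
    )


def refine_graph(
    node_texts: Dict[int, str],
    candidate_edges: List[Edge],
    llm_judge: Callable[[str], str],
) -> Set[Edge]:
    """Keep only the candidate edges that the LLM judge accepts."""
    kept: Set[Edge] = set()
    for u, v in candidate_edges:
        prompt = build_edge_prompt(node_texts[u], node_texts[v])
        answer = llm_judge(prompt).strip().lower()
        if answer.startswith("yes"):
            kept.add((u, v))
    return kept


if __name__ == "__main__":
    # Toy example with a stub in place of the instruction-tuned LLM.
    texts = {
        0: "A paper on graph neural networks (GNNs) for node classification.",
        1: "A study of message passing and over-smoothing in GNNs.",
        2: "A cooking blog post about sourdough bread.",
    }
    candidates = [(0, 1), (0, 2)]

    def stub_judge(prompt: str) -> str:
        # Stand-in for the LLM: connect nodes whose texts both mention GNNs.
        return "yes" if prompt.lower().count("gnn") >= 2 else "no"

    print(refine_graph(texts, candidates, stub_judge))  # {(0, 1)}
```

In the paper's actual pipeline the judge would be the instruction-tuned LLM described in the abstract rather than a keyword stub, but the surrounding control flow (prompt per candidate pair, keep or drop based on the answer) captures the denoising behavior the abstract claims.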

View on arXiv
@article{guo2025_2402.15183,
  title={GraphEdit: Large Language Models for Graph Structure Learning},
  author={Zirui Guo and Lianghao Xia and Yanhua Yu and Yuling Wang and Kangkang Lu and Zhiyong Huang and Chao Huang},
  journal={arXiv preprint arXiv:2402.15183},
  year={2025}
}