ResearchTrend.AI
Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning

9 October 2024
Zhengyu Hu
Yichuan Li
Zhengyu Chen
Jingang Wang
Han Liu
Kyumin Lee
Kaize Ding
Abstract

Textual Attributed Graphs (TAGs) are crucial for modeling complex real-world systems, yet leveraging large language models (LLMs) for TAGs presents unique challenges due to the gap between sequential text processing and graph-structured data. We introduce AskGNN, a novel approach that bridges this gap by leveraging In-Context Learning (ICL) to integrate graph data and task-specific information into LLMs. AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs, incorporating complex graph structures and their supervision signals. Our learning-to-retrieve algorithm optimizes the retriever to select example nodes that maximize LLM performance on graph tasks. Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance, opening new avenues for applying LLMs to graph-structured data without extensive fine-tuning.
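The retrieval-then-prompt idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes node embeddings have already been produced by some GNN encoder, and it stands in for the learned retriever with plain cosine similarity over those embeddings. All function names and data here are hypothetical.

```python
# Hypothetical sketch of ICL example selection for a query node:
# rank labeled nodes by cosine similarity over (assumed) GNN
# embeddings, then assemble a few-shot prompt for an LLM.
import numpy as np

def select_icl_examples(embeddings, query_idx, labeled_idx, k=2):
    """Return indices of the k labeled nodes most similar to the query node."""
    q = embeddings[query_idx]
    cand = embeddings[labeled_idx]
    sims = cand @ q / (np.linalg.norm(cand, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-sims)[:k]
    return [labeled_idx[i] for i in order]

def build_prompt(texts, labels, examples, query_idx):
    """Format retrieved (text, label) pairs plus the unlabeled query node."""
    lines = [f"Text: {texts[i]}\nLabel: {labels[i]}" for i in examples]
    lines.append(f"Text: {texts[query_idx]}\nLabel:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(5, 8))          # stand-in for GNN node embeddings
    texts = [f"paper {i}" for i in range(5)]
    labels = ["A", "B", "A", "B", None]    # node 4 is the unlabeled query
    picked = select_icl_examples(emb, query_idx=4, labeled_idx=[0, 1, 2, 3], k=2)
    print(build_prompt(texts, labels, picked, query_idx=4))
```

In the paper the retriever itself is trained (the learning-to-retrieve step) so that the selected examples maximize downstream LLM performance; the similarity ranking above is only a placeholder for that learned scoring function.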

@article{hu2025_2410.07074,
  title={Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning},
  author={Zhengyu Hu and Yichuan Li and Zhengyu Chen and Jingang Wang and Han Liu and Kyumin Lee and Kaize Ding},
  journal={arXiv preprint arXiv:2410.07074},
  year={2025}
}