arXiv:2402.02130
GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning

3 February 2024
Yanbin Wei
Shuai Fu
Weisen Jiang
Zejian Zhang
Zhixiong Zeng
Qi Wu
James T. Kwok
Yu Zhang
Abstract

Large Language Models (LLMs) are increasingly used for various tasks involving graph structures. Although LLMs can process graph information in textual format, they overlook the rich vision modality, which is an intuitive way for humans to comprehend structural information and conduct general graph reasoning. The potential benefits and capabilities of representing graph structures as visual images (i.e., visual graphs) remain unexplored. To fill this gap, we propose an end-to-end framework, called Graph to vIsual and Textual IntegrAtion (GITA), which is the first to incorporate visual graphs into general graph reasoning. In addition, we construct the Graph-based Vision-Language Question Answering (GVLQA) dataset from existing graph data, which is the first vision-language dataset for general graph reasoning purposes. Extensive experiments on the GVLQA dataset and five real-world datasets show that GITA outperforms mainstream LLMs in terms of general graph reasoning capabilities. Moreover, we highlight the effectiveness of layout augmentation on visual graphs and of pretraining on the GVLQA dataset.
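For context on the textual modality the abstract contrasts with visual graphs, below is a minimal, hypothetical sketch of how a graph's structure might be serialized into a plain-text description for an LLM prompt. The function name and prompt format are illustrative assumptions, not the scheme actually used by GITA:

```python
def graph_to_text(num_nodes, edges):
    """Serialize an undirected graph as a plain-text description.

    Illustrative prompt format only; GITA's actual textual
    encoding is not specified in the abstract.
    """
    lines = [f"The graph has {num_nodes} nodes, numbered 0 to {num_nodes - 1}."]
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

# Example: a 3-node path graph 0-1-2
prompt = graph_to_text(3, [(0, 1), (1, 2)])
```

A visual-graph pipeline would instead render the same edge list as an image (e.g., with a graph-drawing layout) and feed it to a vision-language model alongside the question.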
