
GraphThought: Graph Combinatorial Optimization with Thought Generation

17 February 2025
Zixiao Huang
Lifeng Guo
Wenhao Li
Junjie Sheng
Chuyun Shen
Haosheng Chen
Bo Jin
Changhong Lu
Xiangfeng Wang
LRM · AI4CE
arXiv (abs) · PDF · HTML
Main: 9 pages · 5 figures · 17 tables · Bibliography: 2 pages · Appendix: 23 pages
Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across various domains, especially in text processing and generative tasks. Recent advances in the reasoning capabilities of state-of-the-art LLMs, such as OpenAI-o1, have significantly broadened their applicability, particularly in complex problem solving and logical inference. However, most existing LLMs exhibit notable limitations when handling graph combinatorial optimization (GCO) problems. To bridge this gap, we formally define the Optimal Thoughts Design (OTD) problem, including its state and action thought spaces. We then introduce GraphThought, a novel framework for generating high-quality thought datasets for GCO problems. Leveraging these datasets, we fine-tune Llama-3-8B-Instruct to obtain Llama-GT. Notably, despite its compact 8B-parameter architecture, Llama-GT matches the performance of state-of-the-art LLMs on the GraphArena benchmark. Experimental results show that our approach outperforms both proprietary and open-source models, even rivaling reasoning-specialized models such as o1-mini. This work establishes a new state of the art while challenging the prevailing notion that model scale is the primary driver of reasoning capability.
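To make the thought-dataset idea concrete, the Python sketch below shows one plausible way to turn a classical heuristic's intermediate decisions into step-by-step "thought" training examples for a GCO task. The task (Minimum Vertex Cover), the max-degree greedy rule, and the prompt/response format are illustrative assumptions only; they are not the paper's OTD formulation or the actual GraphThought pipeline.

# Illustrative sketch only: serialize a greedy heuristic's steps as "thoughts"
# for a GCO task (Minimum Vertex Cover). Format and heuristic are assumptions,
# not the paper's GraphThought pipeline.
from typing import Dict, List, Set, Tuple

Graph = Dict[int, Set[int]]  # adjacency sets

def greedy_vertex_cover_thoughts(graph: Graph) -> Tuple[List[str], Set[int]]:
    """Run a max-degree greedy heuristic and record one 'thought' per step."""
    uncovered = {(u, v) for u in graph for v in graph[u] if u < v}
    cover: Set[int] = set()
    thoughts: List[str] = []
    while uncovered:
        # State: remaining uncovered edges. Action: pick the vertex that
        # covers the most of them (a simple greedy action-selection rule).
        degree: Dict[int, int] = {}
        for u, v in uncovered:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)
        thoughts.append(
            f"Uncovered edges: {sorted(uncovered)}. "
            f"Vertex {best} covers {degree[best]} of them, so add {best} to the cover."
        )
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return thoughts, cover

def to_training_example(graph: Graph) -> Dict[str, str]:
    """Serialize the graph and the recorded thoughts as one fine-tuning example."""
    thoughts, cover = greedy_vertex_cover_thoughts(graph)
    edges = sorted({(u, v) for u in graph for v in graph[u] if u < v})
    prompt = f"Find a small vertex cover of the graph with edges {edges}. Think step by step."
    response = "\n".join(thoughts) + f"\nAnswer: {sorted(cover)}"
    return {"prompt": prompt, "response": response}

if __name__ == "__main__":
    g: Graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(to_training_example(g))

Under these assumptions, each training example pairs a problem statement with a chain of state/action "thoughts" ending in a final answer, which is the general shape of data one would fine-tune an instruction model on.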

@article{huang2025_2502.11607,
  title={GraphThought: Graph Combinatorial Optimization with Thought Generation},
  author={Zixiao Huang and Lifeng Guo and Wenhao Li and Junjie Sheng and Chuyun Shen and Haosheng Chen and Bo Jin and Changhong Lu and Xiangfeng Wang},
  journal={arXiv preprint arXiv:2502.11607},
  year={2025}
}