
Generalizable LLM Learning of Graph Synthetic Data with Reinforcement Learning

Main: 9 pages | 4 figures | 6 tables | Bibliography: 8 pages | Appendix: 7 pages
Abstract

Previous research has sought to enhance the graph reasoning capabilities of LLMs by supervised fine-tuning on synthetic graph data. While this yields specialized LLMs that are better at solving graph algorithm problems, we do not need LLMs to compute shortest paths; we need them to generalize from synthetic graph data to real-world tasks with implicit graph structures. In this work, we propose to unlock generalizable learning from graph synthetic data with reinforcement learning. We first design solution-based and process-based rewards for synthetic graph problems: instead of rigidly memorizing response patterns as in direct fine-tuning, we posit that RL helps LLMs grasp the essentials underlying graph reasoning and alleviates overfitting. We employ RL algorithms such as GRPO and DPO, aligning both off-the-shelf LLMs and LLMs fine-tuned on synthetic graph data. We then compare them against existing settings on both in-domain synthetic tasks and out-of-domain real-world tasks with implicit graph structures, such as multi-hop QA and structured planning, among others. Extensive experiments demonstrate that our RL recipe leads to statistically significant improvements on 5 datasets, with an average gain of 12.9% over baseline settings. Further analysis reveals that process-based rewards consistently outperform solution-based rewards, that mixing synthetic and real-world task data yields potential gains, and that compositionality and explainable intermediate steps remain a critical challenge even after RL.
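To illustrate the distinction the abstract draws between the two reward designs, below is a minimal Python sketch (not the authors' released implementation): a solution-based reward scores only the final answer, while a process-based reward also credits intermediate reasoning steps. The function names, the assumption that the answer is the last non-empty line, the string-matching of steps, and the 0.5/0.5 weighting are all hypothetical choices for illustration.

def solution_reward(response: str, gold_answer: str) -> float:
    """Solution-based reward: 1.0 iff the final answer matches the reference."""
    lines = [line.strip() for line in response.strip().splitlines() if line.strip()]
    predicted = lines[-1] if lines else ""  # assume the answer is the last non-empty line
    return 1.0 if predicted == gold_answer.strip() else 0.0

def process_reward(response: str, gold_steps: list[str], gold_answer: str) -> float:
    """Process-based reward: also credit correct intermediate steps (e.g. visited nodes)."""
    lines = [line.strip() for line in response.splitlines() if line.strip()]
    matched = sum(1 for step in gold_steps if any(step in line for line in lines))
    step_score = matched / max(len(gold_steps), 1)
    # Illustrative equal weighting of process and final-answer correctness.
    return 0.5 * step_score + 0.5 * solution_reward(response, gold_answer)

A scalar reward of this form can then be plugged into an RL algorithm such as GRPO; for DPO, it can be used to rank candidate responses into preferred/rejected pairs.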

@article{zhang2025_2506.00845,
  title={Generalizable LLM Learning of Graph Synthetic Data with Reinforcement Learning},
  author={Yizhuo Zhang and Heng Wang and Shangbin Feng and Zhaoxuan Tan and Xinyun Liu and Yulia Tsvetkov},
  journal={arXiv preprint arXiv:2506.00845},
  year={2025}
}