
IGDA: Interactive Graph Discovery through Large Language Model Agents

Abstract

Large language models (LLMs) have emerged as a powerful method for discovery. Instead of utilizing numerical data, LLMs utilize associated variable semantic metadata to predict variable relationships. Simultaneously, LLMs demonstrate impressive abilities to act as black-box optimizers when given an objective $f$ and a sequence of trials. We study LLMs at the intersection of these two capabilities by applying them to the task of interactive graph discovery: given a ground truth graph $G^*$ capturing variable relationships and a budget of $I$ edge experiments over $R$ rounds, minimize the distance between the predicted graph $\hat{G}_R$ and $G^*$ at the end of the $R$-th round. To solve this task we propose IGDA, an LLM-based pipeline incorporating two key components: 1) an LLM uncertainty-driven method for edge experiment selection, and 2) a local graph update strategy utilizing binary feedback from experiments to improve predictions for unselected neighboring edges. Experiments on eight different real-world graphs show our approach often outperforms all baselines, including a state-of-the-art numerical method for interactive graph discovery. Further, we conduct a rigorous series of ablations dissecting the impact of each pipeline component. Finally, to assess the impact of memorization, we apply our interactive graph discovery strategy to a complex, new (as of July 2024) causal graph on protein transcription factors, finding strong performance in a setting where memorization is impossible. Overall, our results show IGDA to be a powerful method for graph discovery, complementary to existing numerically driven approaches.
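The abstract only sketches the pipeline, but the overall loop it describes can be made concrete. The following is a minimal illustrative sketch, not the paper's implementation: llm_predict_edge, the disagreement-based uncertainty proxy, run_experiment, and the neighbor-update rule are all hypothetical stand-ins for what IGDA actually does with an LLM.

import random
from itertools import permutations

def llm_predict_edge(u, v, evidence):
    # Hypothetical stand-in for an LLM query: "does variable u cause
    # variable v?", conditioned on semantic metadata and the binary
    # experimental evidence gathered so far. Mocked with a coin flip.
    return random.random() < 0.5

def uncertainty(u, v, evidence, samples=5):
    # Assumed uncertainty proxy: disagreement across repeated sampled
    # LLM predictions for the same edge (0 = unanimous, 1 = split).
    votes = [llm_predict_edge(u, v, evidence) for _ in range(samples)]
    p = sum(votes) / samples
    return 1.0 - abs(2 * p - 1.0)

def igda_loop(variables, run_experiment, rounds=4, budget_per_round=3):
    edges = list(permutations(variables, 2))
    evidence = {}  # (u, v) -> binary experimental outcome
    # Initial prediction \hat{G}_0 from metadata alone.
    graph = {e: llm_predict_edge(*e, evidence) for e in edges}

    for _ in range(rounds):
        # 1) Uncertainty-driven selection of the next edge experiments.
        untested = [e for e in edges if e not in evidence]
        untested.sort(key=lambda e: uncertainty(*e, evidence), reverse=True)
        for u, v in untested[:budget_per_round]:
            evidence[(u, v)] = run_experiment(u, v)  # binary feedback
            graph[(u, v)] = evidence[(u, v)]
        # 2) Local update: re-predict only unselected edges adjacent to
        #    experimented variables, using the new evidence.
        touched = {x for e in evidence for x in e}
        for u, v in edges:
            if (u, v) not in evidence and (u in touched or v in touched):
                graph[(u, v)] = llm_predict_edge(u, v, evidence)
    return graph  # \hat{G}_R

For instance, igda_loop(["rain", "sprinkler", "wet_grass"], lambda u, v: (u, v) == ("rain", "wet_grass")) runs the loop against a toy oracle in which only rain -> wet_grass holds; a real pipeline would replace the stubs with LLM calls and interventional experiments.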

@article{havrilla2025_2502.17189,
  title={IGDA: Interactive Graph Discovery through Large Language Model Agents},
  author={Alex Havrilla and David Alvarez-Melis and Nicolo Fusi},
  journal={arXiv preprint arXiv:2502.17189},
  year={2025}
}