A Graph Perspective to Probe Structural Patterns of Knowledge in Large Language Models

Large language models have been extensively studied as neural knowledge bases for their knowledge access, editability, reasoning, and explainability. However, few works focus on the structural patterns of their knowledge. Motivated by this gap, we investigate these structural patterns from a graph perspective. We quantify the knowledge of LLMs at both the triplet and entity levels and analyze how it relates to graph structural properties such as node degree. Furthermore, we uncover knowledge homophily, where topologically close entities exhibit similar levels of knowledgeability, which in turn motivates us to develop graph machine learning models that estimate an entity's knowledgeability from its local neighbors. These estimates further enable efficient knowledge checking by selecting the triplets least known to LLMs. Empirical results show that fine-tuning on the selected triplets yields superior performance.
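The abstract does not spell out how entity-level knowledgeability or homophily are computed, so the following is only a minimal sketch of one plausible reading: an entity's score is the fraction of its incident triplets the LLM answers correctly, and homophily is the correlation between each entity's score and the mean score of its graph neighbors. All names (`entity_knowledgeability`, `knowledge_homophily`, `triplet_correct`) are hypothetical and not taken from the paper.

```python
import networkx as nx
import numpy as np

def entity_knowledgeability(graph: nx.Graph, triplet_correct: dict) -> dict:
    """Hypothetical entity-level score: fraction of an entity's incident
    triplets that the LLM is judged to know. `triplet_correct` maps an
    edge (as a frozenset of its endpoints) to 1 if the LLM answered the
    corresponding (head, relation, tail) probe correctly, else 0."""
    scores = {}
    for node in graph.nodes:
        edges = list(graph.edges(node))
        if not edges:
            continue  # skip isolated entities with no triplets to probe
        known = sum(triplet_correct.get(frozenset(e), 0) for e in edges)
        scores[node] = known / len(edges)
    return scores

def knowledge_homophily(graph: nx.Graph, scores: dict) -> float:
    """One possible homophily metric: Pearson correlation between each
    entity's score and the mean score of its neighbors. A high value
    means topologically close entities have similar knowledgeability."""
    own, neigh = [], []
    for node, s in scores.items():
        vals = [scores[n] for n in graph.neighbors(node) if n in scores]
        if vals:
            own.append(s)
            neigh.append(np.mean(vals))
    return float(np.corrcoef(own, neigh)[0, 1])
```

Under this reading, a neighborhood-based predictor (e.g., a GNN regressor over the same graph) would exploit exactly this correlation to flag low-score entities whose triplets are candidates for fine-tuning.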
@article{sahu2025_2505.19286,
  title   = {A Graph Perspective to Probe Structural Patterns of Knowledge in Large Language Models},
  author  = {Utkarsh Sahu and Zhisheng Qi and Yongjia Lei and Ryan A. Rossi and Franck Dernoncourt and Nesreen K. Ahmed and Mahantesh M Halappanavar and Yao Ma and Yu Wang},
  journal = {arXiv preprint arXiv:2505.19286},
  year    = {2025}
}