From GNNs to Trees: Multi-Granular Interpretability for Graph Neural Networks

1 May 2025
Jie Yang
Yuwen Wang
Kaixuan Chen
Tongya Zheng
Yihe Zhou
Zhenbang Xiao
Ji Cao
Mingli Song
Shunyu Liu
Abstract

Interpretable Graph Neural Networks (GNNs) aim to reveal the underlying reasoning behind model predictions, attributing their decisions to specific informative subgraphs. However, existing subgraph-based interpretable methods overemphasize local structure, potentially overlooking long-range dependencies across the entire graph. Although recent efforts based on graph coarsening have proven beneficial for global interpretability, they inevitably reduce graphs to a fixed granularity. Such inflexibility captures graph connectivity only at a single level, whereas real-world graph tasks often exhibit relationships at varying granularities (e.g., relevant interactions in proteins span from functional groups, to amino acids, and up to protein domains). In this paper, we introduce a novel Tree-like Interpretable Framework (TIF) for graph classification, where plain GNNs are transformed into hierarchical trees, with each level featuring coarsened graphs of different granularity as tree nodes. Specifically, TIF iteratively applies a graph coarsening module to compress original graphs (i.e., root nodes of trees) into increasingly coarser ones (i.e., child nodes of trees), while a dedicated graph perturbation module preserves diversity among tree nodes in different branches. Finally, we propose an adaptive routing module to identify the most informative root-to-leaf paths, providing not only the final prediction but also multi-granular interpretability for the decision-making process. Extensive experiments on graph classification benchmarks with both synthetic and real-world datasets demonstrate the superiority of TIF in interpretability, while delivering predictive performance on par with state-of-the-art counterparts.
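To make the tree construction concrete, the sketch below illustrates the abstract's pipeline in plain NumPy: an original graph (the tree root) is repeatedly coarsened into smaller graphs (child nodes), each parent spawning several perturbed branches, and a root-to-leaf path is then selected by a simple score. This is a hedged illustration only — the assignment matrices, perturbations, and routing here are random or heuristic stand-ins, whereas TIF's actual coarsening, perturbation, and routing modules are learned; all function names (`coarsen`, `build_tree`, `route`) are ours, not the paper's.

```python
import numpy as np

def coarsen(adj, feats, n_clusters, rng):
    """One coarsening step via a soft cluster assignment S (random here,
    learned in TIF): A' = S^T A S, X' = S^T X. The fresh random S per call
    also stands in for the paper's branch-diversity perturbation."""
    logits = rng.standard_normal((adj.shape[0], n_clusters))
    S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-softmax
    return S.T @ adj @ S, S.T @ feats

def build_tree(adj, feats, sizes, branches=2, seed=0):
    """Grow a tree of coarsened graphs: level 0 is the original graph,
    each subsequent level coarsens every node into `branches` children."""
    rng = np.random.default_rng(seed)
    levels = [[(adj, feats)]]  # root level: the original graph
    for k in sizes:
        nxt = []
        for a, x in levels[-1]:
            for _ in range(branches):
                nxt.append(coarsen(a, x, k, rng))
        levels.append(nxt)
    return levels

def route(levels):
    """Illustrative routing: pick the leaf whose mean-pooled embedding has
    the largest norm. TIF instead learns an adaptive router that scores
    root-to-leaf paths to yield both the prediction and its explanation."""
    scores = [np.linalg.norm(x.mean(axis=0)) for _, x in levels[-1]]
    return int(np.argmax(scores))
```

Reading off the selected root-to-leaf path then exposes the decision at every granularity: the chosen leaf, its parent coarsenings, and ultimately the subregions of the original graph they summarize.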

@article{yang2025_2505.00364,
  title={From GNNs to Trees: Multi-Granular Interpretability for Graph Neural Networks},
  author={Jie Yang and Yuwen Wang and Kaixuan Chen and Tongya Zheng and Yihe Zhou and Zhenbang Xiao and Ji Cao and Mingli Song and Shunyu Liu},
  journal={arXiv preprint arXiv:2505.00364},
  year={2025}
}