Ignoring Directionality Leads to Compromised Graph Neural Network Explanations

Abstract

Graph Neural Networks (GNNs) are increasingly used in critical domains, where reliable explanations are vital for supporting human decision-making. However, the common practice of graph symmetrization discards directional information, leading to significant information loss and misleading explanations. Our analysis demonstrates how this practice compromises explanation fidelity. Through theoretical and empirical studies, we show that preserving directional semantics significantly improves explanation quality, ensuring more faithful insights for human decision-makers. These findings highlight the need for direction-aware GNN explainability in security-critical applications.
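The information loss from symmetrization can be illustrated with a minimal sketch (the toy graph and variable names below are illustrative, not taken from the paper): two directed graphs with opposite edge orientations collapse to the same undirected adjacency matrix, so any explainer operating on the symmetrized graph cannot tell them apart.

```python
import numpy as np

# Toy 3-node directed graph with edges 0 -> 1 and 1 -> 2.
A = np.array([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])

# A common symmetrization step before applying undirected GNNs:
# keep an undirected edge wherever an edge exists in either direction.
A_sym = ((A + A.T) > 0).astype(int)

# The fully reversed graph (edges 1 -> 0 and 2 -> 1) symmetrizes to
# the identical matrix, so directionality is unrecoverable afterwards.
A_rev_sym = ((A.T + A.T.T - A.T.T + A.T) > 0).astype(int)  # == symmetrized A.T
assert (A_sym == ((A.T + A) > 0).astype(int)).all()
print(A_sym)
```

Because `A_sym` is identical for the original and reversed graphs, an explanation computed on it is ambiguous about which direction of influence it highlights.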

@article{sun2025_2506.04608,
  title={Ignoring Directionality Leads to Compromised Graph Neural Network Explanations},
  author={Changsheng Sun and Xinke Li and Jin Song Dong},
  journal={arXiv preprint arXiv:2506.04608},
  year={2025}
}