Are GNNs doomed by the topology of their input graph?

Graph Neural Networks (GNNs) have demonstrated remarkable success in learning from graph-structured data. However, the influence of the input graph's topology on GNN behavior remains poorly understood. In this work, we explore whether GNNs are inherently limited by the structure of their input graphs, focusing on how local topological features interact with the message-passing scheme to produce global phenomena such as oversmoothing or expressive representations. We introduce the concept of k-hop similarity and investigate whether locally similar neighborhoods lead to consistent node representations. This interaction can result in either effective learning or inevitable oversmoothing, depending on the inherent properties of the graph. Our experiments validate these insights, highlighting the practical implications of graph topology on GNN performance.
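The oversmoothing phenomenon the abstract refers to can be illustrated with a minimal sketch (not the paper's method): under repeated GCN-style mean aggregation with no learned weights, node features on a small toy graph collapse toward a single shared vector, so the graph's topology alone drives the effect. The graph, feature dimensions, and iteration count below are arbitrary choices for illustration.

```python
import numpy as np

# Toy graph: a 4-node cycle with self-loops added.
A = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
], dtype=float)

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in GCN-style message passing.
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

# Random node features; repeated propagation with no learned weights,
# to isolate the effect of the topology itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
for _ in range(50):
    X = A_hat @ X

# After many hops the node representations collapse toward a common vector:
# the spread across nodes becomes negligible (oversmoothing).
spread = X.std(axis=0).max()
print(spread < 1e-6)  # → True
```

Here every nontrivial eigenvalue of the normalized operator has magnitude below 1, so repeated application shrinks all variation between nodes; how fast this happens depends on the graph's spectrum, which is exactly the kind of topology-dependence the abstract highlights.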
@article{aboussalah2025_2502.17739,
  title={Are GNNs doomed by the topology of their input graph?},
  author={Amine Mohamed Aboussalah and Abdessalam Ed-dib},
  journal={arXiv preprint arXiv:2502.17739},
  year={2025}
}