Graph Representational Learning: When Does More Expressivity Hurt Generalization?

16 May 2025
Sohir Maskey, Raffaele Paolino, Fabian Jogl, Gitta Kutyniok, Johannes F. Lutzeyer
Abstract

Graph Neural Networks (GNNs) are powerful tools for learning on structured data, yet the relationship between their expressivity and predictive performance remains unclear. We introduce a family of premetrics that capture different degrees of structural similarity between graphs and relate these similarities to generalization and, consequently, to the performance of expressive GNNs. By considering a setting where graph labels are correlated with structural features, we derive generalization bounds that depend on the distance between training and test graphs, model complexity, and training set size. These bounds reveal that more expressive GNNs may generalize worse unless their increased complexity is balanced by a sufficiently large training set or a reduced distance between training and test graphs. Our findings relate expressivity to generalization, offering theoretical insights supported by empirical results.
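To make the shape of such a bound concrete, one plausible schematic (an illustration of the dependencies the abstract names, not the paper's actual theorem) is

\[
\mathcal{R}_{\mathrm{test}} \;\lesssim\; \widehat{\mathcal{R}}_{\mathrm{train}} \;+\; L \cdot \delta\!\left(\mathbb{G}_{\mathrm{train}}, \mathbb{G}_{\mathrm{test}}\right) \;+\; \mathcal{O}\!\left(\sqrt{\tfrac{C_{\mathrm{model}}}{m}}\right),
\]

where \(\delta\) is a structural premetric between the training and test graphs, \(C_{\mathrm{model}}\) grows with model expressivity, and \(m\) is the training set size: greater expressivity inflates the complexity term unless offset by more data or a smaller structural distance.

As a purely illustrative instance of a structural premetric (not the paper's construction; all names below are hypothetical), the sketch compares two graphs via their 1-Weisfeiler-Leman colour histograms. This is a natural baseline because standard message-passing GNNs are at most as expressive as the 1-WL test, so graphs at distance zero under this premetric are indistinguishable to such models.

```python
# Illustrative sketch, assuming a 1-WL-based notion of structural similarity;
# this is NOT the paper's premetric. Requires networkx.
from collections import Counter

import networkx as nx


def wl_histogram(G: nx.Graph, rounds: int = 3) -> Counter:
    """Run `rounds` of 1-WL colour refinement and return the final colour counts."""
    # Initial colours: node degrees.
    colours = {v: str(G.degree(v)) for v in G.nodes}
    for _ in range(rounds):
        # Refine: each node's new colour combines its colour with the sorted
        # multiset of neighbour colours. hash() is a stand-in for a canonical
        # colour compression; it is consistent within a single Python run.
        colours = {
            v: str(hash((colours[v], tuple(sorted(colours[u] for u in G.neighbors(v))))))
            for v in G.nodes
        }
    return Counter(colours.values())


def wl_premetric(G1: nx.Graph, G2: nx.Graph, rounds: int = 3) -> float:
    """Total-variation distance between normalised 1-WL colour histograms.

    A premetric, not a metric: it is non-negative and symmetric, but it can
    be zero for non-isomorphic graphs that 1-WL cannot distinguish.
    """
    h1, h2 = wl_histogram(G1, rounds), wl_histogram(G2, rounds)
    n1, n2 = G1.number_of_nodes(), G2.number_of_nodes()
    keys = set(h1) | set(h2)
    return 0.5 * sum(abs(h1[k] / n1 - h2[k] / n2) for k in keys)


if __name__ == "__main__":
    # A 6-cycle and two disjoint triangles are non-isomorphic but
    # 1-WL-equivalent, so this premetric assigns them distance 0.
    c6 = nx.cycle_graph(6)
    two_triangles = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
    print(wl_premetric(c6, two_triangles))   # 0.0
    print(wl_premetric(c6, nx.path_graph(6)))  # > 0 (degree histograms differ)
```

The zero-distance pair in the example is the point of a premetric in this context: it deliberately identifies graphs that a model of bounded expressivity cannot tell apart, so the "distance between training and test graphs" in the bound is measured at the resolution the model can actually see.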

@article{maskey2025_2505.11298,
  title={Graph Representational Learning: When Does More Expressivity Hurt Generalization?},
  author={Sohir Maskey and Raffaele Paolino and Fabian Jogl and Gitta Kutyniok and Johannes F. Lutzeyer},
  journal={arXiv preprint arXiv:2505.11298},
  year={2025}
}