
Understanding Heterophily for Graph Neural Networks

17 January 2024
Junfu Wang
Yuanfang Guo
Liang Yang
Yun-an Wang
Abstract

Graphs with heterophily, where nodes are connected to dissimilar neighbors through various patterns, have been regarded as challenging scenarios for Graph Neural Networks (GNNs). In this paper, we present a theoretical understanding of the impacts of different heterophily patterns on GNNs by incorporating graph convolution (GC) operations into fully connected networks via the proposed Heterophilous Stochastic Block Model (HSBM), a general random graph model that can accommodate diverse heterophily patterns. Firstly, we show that by applying a GC operation, the separability gains are determined by two factors: the Euclidean distance between the neighborhood distributions and $\sqrt{\mathbb{E}[\operatorname{deg}]}$, where $\mathbb{E}[\operatorname{deg}]$ is the average node degree. This reveals that the impact of heterophily on classification must be evaluated alongside the average node degree. Secondly, we show that topological noise has a detrimental impact on separability, which is equivalent to degrading $\mathbb{E}[\operatorname{deg}]$. Finally, when multiple GC operations are applied, we show that the separability gains are determined by the normalized distance between the $l$-powered neighborhood distributions, indicating that the nodes remain separable as $l$ goes to infinity in a wide range of regimes. Extensive experiments on both synthetic and real-world data verify the effectiveness of our theory.
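The first result above can be illustrated with a toy simulation. The sketch below is not the paper's HSBM construction; it only mimics the key mechanism under simplified assumptions (scalar Gaussian features, fixed degree, mean aggregation): averaging `deg` neighbor features shrinks within-class variance by a factor of `deg`, so the separability between two classes grows like $\sqrt{\mathbb{E}[\operatorname{deg}]}$ times the distance between the neighborhood means. The function name and parameters are hypothetical.

```python
import numpy as np

def separability_gain(mu0, mu1, sigma, deg, n=20000, seed=0):
    """Toy illustration (not the paper's HSBM): aggregated feature of a node
    is the mean of `deg` neighbor features drawn from its class's neighborhood
    distribution N(mu_c, sigma^2), so its std shrinks to sigma / sqrt(deg)."""
    rng = np.random.default_rng(seed)
    agg0 = rng.normal(mu0, sigma / np.sqrt(deg), n)  # class-0 aggregated features
    agg1 = rng.normal(mu1, sigma / np.sqrt(deg), n)  # class-1 aggregated features
    # Separability: distance between class means in units of pooled std,
    # which scales like |mu0 - mu1| * sqrt(deg) / sigma.
    dist = abs(agg0.mean() - agg1.mean())
    pooled = np.sqrt((agg0.var() + agg1.var()) / 2)
    return dist / pooled

# Quadrupling the degree should roughly double the separability.
s_low = separability_gain(0.0, 1.0, 1.0, deg=4)
s_high = separability_gain(0.0, 1.0, 1.0, deg=16)
print(s_low, s_high, s_high / s_low)  # ratio close to 2
```

Under these assumptions the ratio of the two gains approaches $\sqrt{16/4} = 2$, consistent with the $\sqrt{\mathbb{E}[\operatorname{deg}]}$ factor in the abstract.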
