Rethinking Graph Regularization for Graph Neural Networks

4 September 2020
Han Yang
Kaili Ma
James Cheng
arXiv: 2009.02027
Abstract

The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure A into a model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and we propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses showing that P-reg not only infuses extra information (not captured by traditional graph Laplacian regularization) into GNNs, but also has a capacity equivalent to that of an infinite-depth graph convolutional network. We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks across many different datasets.
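To make the distinction concrete, below is a minimal PyTorch sketch (not the paper's actual implementation) contrasting the classic graph Laplacian regularizer on model outputs with a propagation-based regularizer in the spirit of P-reg: the GNN output Z = f(A, X) is compared against one further propagation step over the normalized adjacency. The helper names, the squared-error discrepancy, and the weight mu are illustrative assumptions.

import torch
import torch.nn.functional as F

def normalized_adjacency(A):
    # \hat{A} = D^{-1}(A + I): row-normalized adjacency with self-loops (dense, for illustration).
    A_hat = A + torch.eye(A.size(0), device=A.device)
    return A_hat / A_hat.sum(dim=1, keepdim=True)

def laplacian_reg(Z, A):
    # Classic graph Laplacian regularization on the model output:
    # sum over edges of ||z_i - z_j||^2, encouraging connected nodes to agree.
    diff = Z.unsqueeze(1) - Z.unsqueeze(0)            # (n, n, c) pairwise differences
    return 0.5 * (A * diff.pow(2).sum(-1)).sum()

def p_reg(Z, A):
    # Propagation-style regularization: compare Z = f(A, X) with one extra
    # propagation step \hat{A} Z (squared error chosen here as the discrepancy).
    return F.mse_loss(normalized_adjacency(A) @ Z, Z)

# Hypothetical training objective: supervised loss on labelled nodes plus
# the regularizer weighted by a factor mu.
# logits = gnn(A, X)                                  # Z = f(A, X)
# loss = F.cross_entropy(logits[train_mask], y[train_mask]) + mu * p_reg(logits, A)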
