ResearchTrend.AI


Understanding Oversmoothing in GNNs as Consensus in Opinion Dynamics

31 January 2025
Keqin Wang
Yulong Yang
Ishan Saha
Christine Allen-Blanchette
Main: 9 pages · 4 figures · 14 tables · Bibliography: 5 pages · Appendix: 17 pages
Abstract

In contrast to classes of neural networks where the learned representations become increasingly expressive with network depth, the learned representations in graph neural networks (GNNs) tend to become increasingly similar. This phenomenon, known as oversmoothing, is characterized by learned representations that cannot be reliably differentiated, leading to reduced predictive performance. In this paper, we propose an analogy between oversmoothing in GNNs and consensus, or agreement, in opinion dynamics. Through this analogy, we show that the message passing structure of recent continuous-depth GNNs is equivalent to a special case of opinion dynamics (i.e., linear consensus models), which has been theoretically proven to converge to consensus (i.e., oversmoothing) for all inputs. Using the understanding developed through this analogy, we design a new continuous-depth GNN model based on nonlinear opinion dynamics and prove that our model, which we call the behavior-inspired message passing neural network (BIMP), circumvents oversmoothing for general inputs. Through extensive experiments, we show that BIMP is robust to oversmoothing and adversarial attacks, and consistently outperforms competitive baselines on numerous benchmarks.
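The linear-consensus claim in the abstract can be illustrated with a toy simulation (this is a generic sketch of linear consensus dynamics, not the authors' BIMP model; the graph and parameters are invented for illustration). Integrating the consensus ODE dx/dt = −Lx, where L is the graph Laplacian, drives all node features toward their common mean, i.e., oversmoothing:

```python
import numpy as np

# Hypothetical 4-node graph (adjacency chosen for illustration only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A  # graph Laplacian

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))  # random 2-d node features

# Forward-Euler integration of dx/dt = -L x, the linear consensus
# model that continuous-depth message passing reduces to.
dt, steps = 0.05, 400
for _ in range(steps):
    x = x - dt * (L @ x)

# All node features collapse toward the mean: consensus == oversmoothing.
spread = np.linalg.norm(x - x.mean(axis=0), axis=1)
print(spread.max())  # close to zero for any connected graph
```

Because the Laplacian of a connected graph has a single zero eigenvalue (with the all-ones eigenvector), the solution decays onto the mean at rate governed by the second-smallest eigenvalue, for every input, which is why such models cannot avoid oversmoothing and why the paper turns to nonlinear opinion dynamics instead.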

@article{wang2025_2501.19089,
  title={Resolving Oversmoothing with Opinion Dissensus},
  author={Keqin Wang and Yulong Yang and Ishan Saha and Christine Allen-Blanchette},
  journal={arXiv preprint arXiv:2501.19089},
  year={2025}
}