ResearchTrend.AI
GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning

18 December 2024
Jianqing Liang
Xinkai Wei
Min Chen
Zhiqiang Wang
Jiye Liang
Abstract

Graph contrastive learning (GCL) has become a hot topic in the field of graph representation learning. In contrast to traditional supervised learning, which relies on a large number of labels, GCL exploits augmentation strategies to generate multiple views and positive/negative pairs, both of which greatly influence performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, the most widely employed encoders in GCL, are inevitably confronted with over-smoothing and over-squashing problems. To address these issues, we propose the GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning (GTCA), which inherits the advantages of both GNNs and Transformers and incorporates graph topology to obtain comprehensive graph representations. Theoretical analysis verifies the trustworthiness of the proposed method. Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
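To make the contrastive setup concrete: a GCL pipeline typically encodes two augmented views of a graph and pulls matching node embeddings together while pushing mismatched ones apart. The snippet below is a minimal NumPy sketch of a generic InfoNCE-style objective often used in GCL — it is not the paper's GTCA loss, and the function name and temperature default are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Generic InfoNCE-style contrastive loss between two views (sketch).

    z1, z2: (n_nodes, dim) node embeddings from two augmented views.
    Same-index rows are treated as positive pairs; all other cross-view
    pairs act as negatives. `tau` is the softmax temperature.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    exp_sim = np.exp(sim)
    # The positive pair sits on the diagonal; the denominator sums
    # over all cross-view pairs for each anchor node.
    pos = np.diag(exp_sim)
    loss = -np.log(pos / exp_sim.sum(axis=1))
    return loss.mean()
```

With perfectly aligned views (`z1 == z2`) the diagonal holds the maximal cosine similarity, so the loss is lower than when the positive pairing is scrambled — the gradient signal the encoder trains on.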

@article{liang2025_2412.16218,
  title={GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning},
  author={Jianqing Liang and Xinkai Wei and Min Chen and Zhiqiang Wang and Jiye Liang},
  journal={arXiv preprint arXiv:2412.16218},
  year={2025}
}