
Graph Positional Autoencoders as Self-supervised Learners

29 May 2025
Yang Liu, Deyu Bo, Wenxuan Cao, Yuan Fang, Yawen Li, Chuan Shi
Main: 7 pages · 3 figures · 11 tables · Bibliography: 4 pages · Appendix: 1 page
Abstract

Graph self-supervised learning seeks to learn effective graph representations without relying on labeled data. Among various approaches, graph autoencoders (GAEs) have gained significant attention for their efficiency and scalability. Typically, GAEs take incomplete graphs as input and predict missing elements, such as masked nodes or edges. While effective, our experimental investigation reveals that traditional node- or edge-masking paradigms primarily capture low-frequency signals in the graph and fail to learn expressive structural information. To address this issue, we propose Graph Positional Autoencoders (GraphPAE), which employs a dual-path architecture to reconstruct both node features and positions. Specifically, the feature path uses positional encodings to enhance message passing, improving the GAE's ability to predict corrupted information. The position path, in turn, leverages node representations to refine positions and approximate eigenvectors, enabling the encoder to learn diverse frequency information. We conduct extensive experiments to verify the effectiveness of GraphPAE on heterophilic node classification, graph property prediction, and transfer learning. The results demonstrate that GraphPAE achieves state-of-the-art performance and consistently outperforms baselines by a large margin.

@article{liu2025_2505.23345,
  title={Graph Positional Autoencoders as Self-supervised Learners},
  author={Yang Liu and Deyu Bo and Wenxuan Cao and Yuan Fang and Yawen Li and Chuan Shi},
  journal={arXiv preprint arXiv:2505.23345},
  year={2025}
}