Lighter-X: An Efficient and Plug-and-play Strategy for Graph-based Recommendation through Decoupled Propagation

Proceedings of the VLDB Endowment (PVLDB), 2025
11 October 2025
Yanping Zheng, Zhewei Wei, Frank de Hoog, Xu Chen, Hongteng Xu, Yuhang Ye, Jiadeng Huang
arXiv:2510.10105 (abs) · PDF · HTML · GitHub
Main: 8 pages · Bibliography: 1 page · Appendix: 7 pages · 8 figures · 17 tables
Abstract

Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness in recommendation systems. However, conventional graph-based recommenders, such as LightGCN, must maintain an embedding of size $d$ for each node, resulting in a parameter complexity of $\mathcal{O}(n \times d)$, where $n$ is the total number of users and items. This scaling poses significant challenges for deployment on the large-scale graphs encountered in real-world applications. To address this scalability limitation, we propose Lighter-X, an efficient and modular framework that can be seamlessly integrated with existing GNN-based recommender architectures. Our approach substantially reduces both parameter size and computational complexity while preserving the theoretical guarantees and empirical performance of the base models, thereby enabling practical deployment at scale. Specifically, we analyze the structure of existing models and the inherent redundancy in their parameters, identifying opportunities for optimization. Based on this insight, we propose an efficient compression scheme for the sparse adjacency structure and high-dimensional embedding matrices, achieving a parameter complexity of $\mathcal{O}(h \times d)$, where $h \ll n$. Furthermore, the model is optimized through a decoupled framework, reducing the computational cost of training and improving scalability. Extensive experiments demonstrate that Lighter-X achieves performance comparable to baseline models with significantly fewer parameters. In particular, on large-scale interaction graphs with millions of edges, it attains even better results with only 1% of the parameters of LightGCN.
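To make the complexity argument concrete, the sketch below contrasts an $\mathcal{O}(n \times d)$ embedding table with an $\mathcal{O}(h \times d)$ compressed parameterization and a precomputed (decoupled) propagation step. This is a minimal illustration under assumed design choices, not the authors' implementation: the names `anchor` table, `projection` matrix, and the specific way embeddings are reconstructed are hypothetical stand-ins for the compression scheme described in the abstract.

```python
# Hypothetical sketch (not the paper's code): O(n*d) vs. O(h*d) parameterization
# and decoupled propagation, in PyTorch. `projection` and `anchors` are assumed
# illustrative names, not identifiers from the Lighter-X repository.
import torch
import torch.nn as nn


class FullEmbeddingRecommender(nn.Module):
    """LightGCN-style baseline: one trainable d-dim vector per user/item node."""

    def __init__(self, num_nodes: int, d: int):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, d)  # O(n * d) trainable parameters

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        return self.emb(node_ids)


class CompressedEmbeddingRecommender(nn.Module):
    """Sketch of an O(h * d) parameterization with h << n.

    Each node embedding is a fixed sparse combination of h trainable anchor
    vectors; only the small anchor table is learned.
    """

    def __init__(self, projection: torch.Tensor, d: int):
        super().__init__()
        # `projection` is a fixed sparse (n x h) matrix, e.g. derived from the
        # interaction graph; it is stored as a buffer, not a parameter.
        self.register_buffer("projection", projection)
        h = projection.shape[1]
        self.anchors = nn.Parameter(torch.randn(h, d) * 0.01)  # O(h * d)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        all_emb = torch.sparse.mm(self.projection, self.anchors)  # (n, d)
        return all_emb[node_ids]


@torch.no_grad()
def precompute_propagation(adj_norm: torch.Tensor, features: torch.Tensor,
                           num_layers: int = 3) -> torch.Tensor:
    """Decoupled propagation: run K-hop graph smoothing once, outside training."""
    out = features
    acc = features.clone()
    for _ in range(num_layers):
        out = torch.sparse.mm(adj_norm, out)  # one hop of normalized propagation
        acc = acc + out
    return acc / (num_layers + 1)  # LightGCN-style layer averaging
```

Under this kind of parameterization the trainable state is the $h \times d$ anchor table, so the parameter count no longer grows with the number of users and items, and moving the multi-layer propagation into a one-off precomputation is what "decoupled" refers to: training then touches only the compressed parameters rather than repeating graph convolutions every step.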
