The quest for the GRAph Level autoEncoder (GRALE)

28 May 2025
Paul Krzakala
Gabriel Melo
Charlotte Laclau
Florence d'Alché-Buc
Rémi Flamary
Main: 9 pages · 16 figures · Bibliography: 5 pages · 9 tables · Appendix: 14 pages
Abstract

Although graph-based learning has attracted a lot of attention, graph representation learning is still a challenging task whose resolution may impact key application fields such as chemistry or biology. To this end, we introduce GRALE, a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. GRALE is trained using an Optimal Transport-inspired loss that compares the original and reconstructed graphs and leverages a differentiable node matching module, which is trained jointly with the encoder and decoder. The proposed attention-based architecture relies on Evoformer, the core component of AlphaFold, which we extend to support both graph encoding and decoding. We show, in numerical experiments on simulated and molecular data, that GRALE enables a highly general form of pre-training, applicable to a wide range of downstream tasks, from classification and regression to more complex tasks such as graph interpolation, editing, matching, and prediction.

@article{krzakala2025_2505.22109,
  title={The quest for the GRAph Level autoEncoder (GRALE)},
  author={Paul Krzakala and Gabriel Melo and Charlotte Laclau and Florence d'Alché-Buc and Rémi Flamary},
  journal={arXiv preprint arXiv:2505.22109},
  year={2025}
}