A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder

31 May 2025
Xinxu Wei
Kanhao Zhao
Yong Jiao
Lifang He
Yu Zhang
Community: AI4CE
Main: 9 pages · Appendix: 20 pages · Bibliography: 5 pages · 8 figures · 15 tables
Abstract

As large language models (LLMs) continue to revolutionize AI research, there is growing interest in building large-scale brain foundation models to advance neuroscience. While most existing brain foundation models are pre-trained on time-series signals or region-of-interest (ROI) features, we propose a novel graph-based pre-training paradigm for constructing a brain graph foundation model. In this paper, we introduce the Brain Graph Foundation Model, termed BrainGFM, a unified framework that leverages graph contrastive learning and graph masked autoencoders for large-scale fMRI-based pre-training. BrainGFM is pre-trained on a diverse mixture of brain atlases with varying parcellations, significantly expanding the pre-training corpus and enhancing the model's ability to generalize across heterogeneous fMRI-derived brain representations. To support efficient and versatile downstream transfer, we integrate both graph prompts and language prompts into the model design, enabling BrainGFM to flexibly adapt to a wide range of atlases, neurological and psychiatric disorders, and task settings. Furthermore, we employ meta-learning to optimize the graph prompts, facilitating strong generalization to previously unseen disorders under both few-shot and zero-shot learning conditions via language-guided prompting. BrainGFM is pre-trained on 27 neuroimaging datasets spanning 25 common neurological and psychiatric disorders, encompassing 2 types of brain atlases (functional and anatomical) across 8 widely used parcellations, and covering over 25,000 subjects, 60,000 fMRI scans, and a total of 400,000 graph samples aggregated across all atlases and parcellations. The code is available at: this https URL
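The abstract names two self-supervised pre-training objectives, graph contrastive learning and graph masked autoencoders, applied to ROI-level brain graphs. Below is a minimal sketch of how such a combined objective can be set up on a single parcellation graph; the two-layer GCN encoder, the augmentations, the masking ratio, and the node-level InfoNCE loss (a graph-level variant would pool node embeddings across a batch of graphs first) are all illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of the two pre-training objectives named in the
# abstract: contrastive learning over two augmented graph views (InfoNCE)
# and a graph masked autoencoder (reconstruct masked ROI features).
# Architecture, augmentations, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNEncoder(nn.Module):
    """Two-layer GCN over a dense, normalized ROI-connectivity adjacency."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) ROI features; adj: (N, N) normalized adjacency
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)

def info_nce(z1, z2, tau=0.2):
    # Contrastive loss: matching rows of the two views are positives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = (z1 @ z2.t()) / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

def masked_recon_loss(encoder, decoder, x, adj, mask_ratio=0.5):
    # Graph-MAE-style objective: zero out a random subset of ROI
    # features and reconstruct them from the surviving graph context.
    mask = torch.rand(x.size(0)) < mask_ratio
    x_masked = x.clone()
    x_masked[mask] = 0.0
    recon = decoder(encoder(x_masked, adj), adj)
    return F.mse_loss(recon[mask], x[mask])

# Toy usage on one brain graph (e.g., a 200-ROI parcellation).
N, D, H = 200, 64, 128
x = torch.randn(N, D)                            # stand-in ROI features
adj = torch.softmax(torch.randn(N, N), dim=-1)   # stand-in normalized adjacency
enc, dec = GCNEncoder(D, H), GCNEncoder(H, D)    # decoder mirrors the encoder
z1 = enc(x + 0.1 * torch.randn_like(x), adj)     # view 1: feature noise
z2 = enc(F.dropout(x, p=0.2), adj)               # view 2: feature dropout
loss = info_nce(z1, z2) + masked_recon_loss(enc, dec, x, adj)
loss.backward()

Because the adjacency is dense and the node count N is not fixed, the same encoder can ingest graphs built from different parcellations, which is consistent with the atlas-mixture pre-training described in the abstract.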

BibTeX
@article{wei2025_2506.02044,
  title={A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder},
  author={Xinxu Wei and Kanhao Zhao and Yong Jiao and Lifang He and Yu Zhang},
  journal={arXiv preprint arXiv:2506.02044},
  year={2025}
}