MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL

18 April 2025
Jinhui Pang
Changqing Lin
Hao Lin
Jinglin He
Zhengjun Li
Zhihui Zhang
Xiaoshuai Hao
Abstract

Graph Few-Shot Class-Incremental Learning (GFSCIL) enables models to continually learn from limited samples of novel tasks after initial training on a large base dataset. Existing GFSCIL approaches typically rely on Prototypical Networks (PNs) for metric-based class representations and fine-tune the model during the incremental learning stage. However, these PN-based methods oversimplify learning by fine-tuning on the novel query set and, due to architectural constraints, cannot integrate Graph Continual Learning (GCL) techniques. To address these challenges, we propose a more rigorous and practical GFSCIL setting that excludes query sets during the incremental training phase. Building on this foundation, we introduce Model-Agnostic Meta Graph Continual Learning (MEGA), which effectively alleviates catastrophic forgetting in GFSCIL. Specifically, by computing an incremental second-order gradient during the meta-training stage, we enable the model to learn high-quality priors that enhance incremental learning by aligning its behavior across the meta-training and incremental learning stages. Extensive experiments on four mainstream graph datasets demonstrate that MEGA achieves state-of-the-art results and enhances the effectiveness of various GCL methods in GFSCIL. We believe MEGA serves as a model-agnostic GFSCIL paradigm, paving the way for future research.
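The central mechanism the abstract describes, differentiating through an inner adaptation step so the outer update carries second-order gradient information, can be illustrated with a short sketch. The PyTorch snippet below is a minimal MAML-style stand-in, assuming a toy linear classifier and synthetic support/query tensors; MEGA's graph encoder, episode construction, and alignment objective are not reproduced here.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical toy setup: 16-dim node features, 5 base classes.
w = torch.randn(5, 16, requires_grad=True)   # meta-learned parameters
inner_lr, meta_lr = 0.1, 0.01
meta_opt = torch.optim.SGD([w], lr=meta_lr)

def episode_loss(weight, x, y):
    # Linear classifier standing in for a graph encoder + classifier head.
    return F.cross_entropy(x @ weight.t(), y)

for step in range(100):
    # Synthetic support/query split standing in for a few-shot graph episode.
    xs, ys = torch.randn(25, 16), torch.randint(0, 5, (25,))
    xq, yq = torch.randn(25, 16), torch.randint(0, 5, (25,))

    # Inner step: adapt on the support set. create_graph=True retains the
    # computation graph, so the outer update can differentiate through this
    # step -- the source of the second-order gradient.
    g = torch.autograd.grad(episode_loss(w, xs, ys), w, create_graph=True)[0]
    w_adapted = w - inner_lr * g

    # Outer step: the query loss on the adapted parameters updates the
    # shared initialization, aligning pre- and post-adaptation behavior.
    meta_opt.zero_grad()
    episode_loss(w_adapted, xq, yq).backward()
    meta_opt.step()

Setting create_graph=True is what separates genuine second-order meta-training from first-order approximations such as FOMAML; the trade-off is the extra memory needed to retain the inner-step graph during the outer backward pass.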

@article{pang2025_2504.13691,
  title={MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL},
  author={Jinhui Pang and Changqing Lin and Hao Lin and Jinglin He and Zhengjun Li and Zhihui Zhang and Xiaoshuai Hao},
  journal={arXiv preprint arXiv:2504.13691},
  year={2025}
}