Incrementally Learning Multiple Diverse Data Domains via Multi-Source Dynamic Expansion Model

15 January 2025
RunQing Wu
Fei Ye
QiHe Liu
Guoxi Huang
Jinyu Guo
Rongyao Hu
    CLL
Abstract

Continual Learning seeks to develop a model capable of incrementally assimilating new information while retaining prior knowledge. However, current research predominantly addresses a simplified setting in which all data samples originate from a single domain. This paper shifts focus to a more complex and realistic learning environment in which data samples are drawn from multiple distinct domains. We tackle this challenge by introducing a novel methodology, termed the Multi-Source Dynamic Expansion Model (MSDEM), which leverages various pre-trained models as backbones and progressively establishes new experts on top of them to adapt to emerging tasks. Additionally, we propose a dynamic expandable attention mechanism designed to selectively harness knowledge from the multiple backbones, thereby accelerating learning on new tasks. Moreover, we introduce a dynamic graph weight router that strategically reuses all previously acquired parameters and representations for new task learning, maximizing positive knowledge transfer and further improving generalization performance. We conduct a comprehensive series of experiments, and the empirical findings indicate that our proposed approach achieves state-of-the-art performance.
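
The architecture described above can be summarized compactly in code. Below is a minimal PyTorch sketch of the ingredients named in the abstract: frozen pre-trained backbones, one new expert per task, an attention module that fuses the backbone features, and a learned router that mixes the new expert's representation with those of all earlier experts. This is an illustration under stated assumptions, not the authors' implementation; every class, method, and attribute name (MSDEM, DynamicExpandableAttention, expand, routers) is invented here, and each backbone is assumed to map an input batch to a (batch, feat_dim) feature tensor.

import torch
import torch.nn as nn

class DynamicExpandableAttention(nn.Module):
    # Attends over features from multiple frozen backbones so a new
    # expert can selectively draw on each knowledge source.
    def __init__(self, feat_dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(feat_dim))
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, backbone_feats: torch.Tensor) -> torch.Tensor:
        # backbone_feats: (batch, num_backbones, feat_dim)
        scores = backbone_feats @ self.query            # (batch, num_backbones)
        weights = scores.softmax(dim=-1).unsqueeze(-1)  # (batch, num_backbones, 1)
        fused = (weights * backbone_feats).sum(dim=1)   # (batch, feat_dim)
        return self.proj(fused)

class MSDEM(nn.Module):
    # Keeps pre-trained backbones frozen and grows one expert per task.
    # A learned per-expert weight vector mixes representations from all
    # experts so far, loosely mirroring the "dynamic graph weight router"
    # described in the abstract.
    def __init__(self, backbones, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)
        for b in self.backbones:
            b.requires_grad_(False)                     # sources stay frozen
        self.feat_dim = feat_dim
        self.num_classes = num_classes
        self.experts = nn.ModuleList()
        self.routers = nn.ParameterList()               # one weight vector per expert

    def expand(self):
        # Add a new expert (attention + classification head) for a new task.
        self.experts.append(nn.ModuleDict({
            "attn": DynamicExpandableAttention(self.feat_dim),
            "head": nn.Linear(self.feat_dim, self.num_classes),
        }))
        # Router weights over the new expert and all earlier ones.
        self.routers.append(nn.Parameter(torch.zeros(len(self.experts))))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = torch.stack([b(x) for b in self.backbones], dim=1)
        # Each expert fuses the backbone features; the newest expert's
        # router mixes its representation with all earlier ones.
        reps = [e["attn"](feats) for e in self.experts]
        w = self.routers[-1].softmax(dim=0)
        mixed = sum(wi * r for wi, r in zip(w, reps))
        return self.experts[-1]["head"](mixed)

# Usage sketch: call expand() once per incoming task/domain, then train
# only the newest expert and its router while everything else stays fixed.
# model = MSDEM(backbones=[vit, resnet], feat_dim=768, num_classes=10)
# model.expand()
# logits = model(images)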

@article{wu2025_2501.08878,
  title={Incrementally Learning Multiple Diverse Data Domains via Multi-Source Dynamic Expansion Model},
  author={Runqing Wu and Fei Ye and Qihe Liu and Guoxi Huang and Jinyu Guo and Rongyao Hu},
  journal={arXiv preprint arXiv:2501.08878},
  year={2025}
}