Universal Incremental Learning: Mitigating Confusion from Inter- and Intra-task Distribution Randomness

10 March 2025
Sheng Luo
Yi Zhou
Tao Zhou
Abstract

Incremental learning (IL) aims to overcome catastrophic forgetting of previous tasks while learning new ones. Existing IL methods make the strong assumption that the incoming task will either only increase new classes or domains (i.e., Class IL, Domain IL), or increase by a static scale in a class- and domain-agnostic manner (i.e., Versatile IL (VIL)), which greatly limits their applicability in the unpredictable and dynamic wild. In this work, we investigate Universal Incremental Learning (UIL), where a model knows neither which new classes or domains will arrive along sequential tasks, nor the scale of the increments within each task. This uncertainty prevents the model from confidently learning knowledge from all task distributions and from symmetrically focusing on the diverse knowledge within each task distribution. Consequently, UIL presents a more general and realistic IL scenario, in which the model faces confusion arising from inter-task and intra-task distribution randomness. To Mitigate both kinds of Confusion, we propose a simple yet effective framework for UIL, named MiCo. At the inter-task distribution level, we employ a multi-objective learning scheme to enforce accurate and deterministic predictions; its effectiveness is further enhanced by a direction recalibration module that reduces conflicting gradients. At the intra-task distribution level, we introduce a magnitude recalibration module to alleviate asymmetric optimization towards the imbalanced class distribution. Extensive experiments on three benchmarks demonstrate the effectiveness of our method, which outperforms existing state-of-the-art methods in both the UIL and VIL scenarios. Our code will be available at this https URL.
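The abstract names two gradient-level fixes: a direction recalibration module that reduces conflicting gradients across objectives, and a magnitude recalibration module that counters asymmetric optimization under class imbalance. The paper's exact formulations are not given here; the following is a minimal PyTorch sketch assuming a PCGrad-style projection for the former and inverse-frequency gradient scaling for the latter (function names and the toy data are hypothetical, not the authors' implementation).

```python
import torch

def recalibrate_direction(g_a: torch.Tensor, g_b: torch.Tensor) -> torch.Tensor:
    # If the two objective gradients conflict (negative inner product),
    # project g_a onto the normal plane of g_b before summing, so the
    # combined update no longer fights itself (PCGrad-style assumption).
    dot = torch.dot(g_a, g_b)
    if dot < 0:
        g_a = g_a - (dot / (g_b.norm() ** 2 + 1e-12)) * g_b
    return g_a + g_b

def recalibrate_magnitude(per_class_grads, class_counts: torch.Tensor) -> torch.Tensor:
    # Rescale each class's gradient inversely to its frequency so that
    # frequent classes do not dominate the update on an imbalanced task.
    weights = class_counts.sum() / (len(class_counts) * class_counts.float())
    return torch.stack([w * g for w, g in zip(weights, per_class_grads)]).sum(dim=0)

# Toy usage on flattened gradient vectors.
g1, g2 = torch.randn(128), torch.randn(128)
update = recalibrate_direction(g1, g2)

counts = torch.tensor([900, 90, 10])        # hypothetical imbalanced class counts
grads = [torch.randn(128) for _ in counts]  # one gradient per class
balanced = recalibrate_magnitude(grads, counts)
print(update.shape, balanced.shape)         # torch.Size([128]) twice
```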

@article{luo2025_2503.07035,
  title={Universal Incremental Learning: Mitigating Confusion from Inter- and Intra-task Distribution Randomness},
  author={Sheng Luo and Yi Zhou and Tao Zhou},
  journal={arXiv preprint arXiv:2503.07035},
  year={2025}
}