Online Prototypes and Class-Wise Hypergradients for Online Continual Learning with Pre-Trained Models

26 February 2025
Nicolas Michel
Maorong Wang
Jiangpeng He
Toshihiko Yamasaki
    CLL
Abstract

Continual Learning (CL) addresses the problem of learning from a data sequence whose distribution changes over time. Recently, efficient solutions leveraging Pre-Trained Models (PTM) have been widely explored in the offline CL (offCL) scenario, where the data corresponding to each incremental task is known beforehand and can be seen multiple times. However, such solutions often rely on 1) prior knowledge of task changes and 2) hyper-parameter search, particularly regarding the learning rate. Neither assumption holds in online CL (onCL) scenarios, where the incoming data distribution is unknown and the model can observe each datum only once. As a result, existing offCL strategies lag far behind in performance in onCL, and some are difficult or impossible to adapt to the online setting. In this paper, we tackle both problems by leveraging Online Prototypes (OP) and Class-Wise Hypergradients (CWH). OP exploits the stable output representations of the PTM by updating prototype values on the fly, so that they act as replay samples without requiring task boundaries or storing past data. CWH learns class-dependent gradient coefficients during training to improve over sub-optimal learning rates. Experiments show that both strategies yield consistent accuracy gains when integrated with existing approaches. We will make the code fully available upon acceptance.
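
The two components lend themselves to a compact illustration. The sketch below is not the paper's implementation; it only shows the general ideas the abstract describes, under illustrative assumptions: a per-class running mean over frozen PTM features standing in for Online Prototypes, and a per-class learning-rate coefficient adapted with a hypergradient-style rule (in the spirit of hypergradient descent) standing in for Class-Wise Hypergradients. All names and hyper-parameters are placeholders.

# Minimal sketch, not the authors' method: illustrative stand-ins for
# Online Prototypes and Class-Wise Hypergradients as described above.
import torch


class OnlinePrototypes:
    """Running per-class mean of frozen PTM features, updated on the fly."""

    def __init__(self, feat_dim: int, num_classes: int):
        self.protos = torch.zeros(num_classes, feat_dim)
        self.counts = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # feats: (B, D) frozen backbone outputs; labels: (B,) integer class ids.
        for c in labels.unique():
            mask = labels == c
            n_new = mask.sum()
            n_old = self.counts[c]
            batch_mean = feats[mask].mean(dim=0)
            # Incremental mean: no task boundaries and no stored past samples.
            self.protos[c] = (n_old * self.protos[c] + n_new * batch_mean) / (n_old + n_new)
            self.counts[c] += n_new


class ClassWiseLR:
    """Per-class learning-rate coefficients adapted with a hypergradient-style rule."""

    def __init__(self, num_classes: int, base_lr: float = 0.01, meta_lr: float = 1e-4):
        self.coeff = torch.full((num_classes,), base_lr)
        self.meta_lr = meta_lr
        self.prev_grad = {}  # class id -> previous gradient for that class' parameters

    @torch.no_grad()
    def step(self, c: int, grad: torch.Tensor, param: torch.Tensor) -> None:
        # Raise the class coefficient when consecutive gradients agree,
        # lower it when they disagree (hypergradient-descent-style signal).
        if c in self.prev_grad:
            self.coeff[c] += self.meta_lr * torch.dot(grad.flatten(), self.prev_grad[c].flatten())
            self.coeff[c] = self.coeff[c].clamp(min=1e-5)
        self.prev_grad[c] = grad.clone()
        param -= self.coeff[c] * grad  # class-specific update, e.g. on a classifier row

In an online loop, such prototypes could stand in for a replay buffer and the per-class coefficients could replace a single global learning rate; how the paper actually integrates OP and CWH with existing approaches is specified only in the full text.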

View on arXiv
@article{michel2025_2502.18762,
  title={Online Prototypes and Class-Wise Hypergradients for Online Continual Learning with Pre-Trained Models},
  author={Nicolas Michel and Maorong Wang and Jiangpeng He and Toshihiko Yamasaki},
  journal={arXiv preprint arXiv:2502.18762},
  year={2025}
}