Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
Main: 7 pages · Appendix: 9 pages · Bibliography: 4 pages · 11 figures · 8 tables
Abstract

Prototypes, as representations of class embeddings, have been explored to reduce the memory footprint and mitigate forgetting in continual learning scenarios. However, prototype-based methods still suffer from abrupt performance deterioration due to semantic drift and prototype interference. In this study, we propose Contrastive Prototypical Prompt (CPP) and show that task-specific prompt-tuning, when optimized over a contrastive learning objective, can effectively address both obstacles and significantly improve the potency of prototypes. Our experiments demonstrate that CPP excels on four challenging class-incremental learning benchmarks, yielding 4% to 6% absolute improvements over state-of-the-art methods. Moreover, CPP requires no rehearsal buffer and largely bridges the performance gap between continual learning and offline joint learning, showcasing a promising design scheme for continual learning systems built on a Transformer architecture.
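To make the core idea concrete, here is a minimal sketch of the kind of prototype-plus-contrastive setup the abstract describes: class prototypes computed as mean embeddings, and an InfoNCE-style loss that pulls a query embedding toward its class prototype and away from the others. All function names, the temperature parameter, and the mean-pooling recipe are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Mean embedding per class (illustrative; CPP's exact prototype recipe may differ)."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def contrastive_prototype_loss(query, label, classes, protos, tau=0.1):
    """InfoNCE-style loss over prototypes: attract the query to its own class
    prototype and repel it from the rest (a sketch, not the paper's objective)."""
    q = query / np.linalg.norm(query)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = p @ q / tau                       # cosine similarities / temperature
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    idx = int(np.where(classes == label)[0][0])
    return -np.log(probs[idx])

# Toy usage: two well-separated classes in 2-D.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(emb, lab)
query = np.array([1.0, 0.05])
loss_correct = contrastive_prototype_loss(query, 0, classes, protos)
loss_wrong = contrastive_prototype_loss(query, 1, classes, protos)
```

A query near its own class prototype incurs a much smaller loss than the same query scored against the wrong class, which is the mechanism that sharpens prototype separation during prompt-tuning.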
