
Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning

Abstract

The primary objective of continual learning methods is to learn tasks sequentially over time (sometimes from a stream of data) while mitigating the detrimental phenomenon of catastrophic forgetting. This paper proposes a method to learn an effective representation that relates previously learned and newly encountered class prototypes. We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL), tailored specifically for class-incremental learning scenarios. We introduce a contrastive loss that integrates novel classes into the latent representation by reducing intra-class distances and increasing inter-class distances. Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions using a Bayesian learning technique. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification, and on a GNSS-based image dataset for interference classification, validate the efficacy of our method, showcasing its superiority over existing state-of-the-art approaches. Git: this https URL
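
The abstract leaves the exact formulation to the paper, but the loss composition it describes can be sketched in PyTorch. The following is a minimal, hypothetical illustration: it assumes a cosine-similarity prototypical contrastive term, and learned homoscedastic-uncertainty weighting as one plausible Bayesian-flavored way to balance the cross-entropy and contrastive terms. The class names, shapes, and weighting scheme are assumptions for this sketch, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypicalContrastiveLoss(nn.Module):
    """Pulls embeddings toward their class prototype (reducing intra-class
    distance) and away from other prototypes (increasing inter-class
    distance). Hypothetical sketch, not the paper's exact formulation."""
    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, embeddings, prototypes, labels):
        z = F.normalize(embeddings, dim=1)     # (B, D) sample embeddings
        p = F.normalize(prototypes, dim=1)     # (C, D) class prototypes
        logits = z @ p.t() / self.temperature  # (B, C) cosine similarities
        # Maximizing the true-prototype probability pulls samples to their
        # own prototype and pushes them from the others.
        return F.cross_entropy(logits, labels)

class UncertaintyWeightedLoss(nn.Module):
    """Adaptive balance between two loss terms via learned log-variances
    (homoscedastic-uncertainty weighting); assumed here as a stand-in for
    the paper's Bayesian balancing."""
    def __init__(self):
        super().__init__()
        self.log_var_ce = nn.Parameter(torch.zeros(()))
        self.log_var_con = nn.Parameter(torch.zeros(()))

    def forward(self, loss_ce, loss_con):
        # Each term is down-weighted by its learned variance; the
        # log-variances act as regularizers against collapsing to zero.
        return (torch.exp(-self.log_var_ce) * loss_ce
                + torch.exp(-self.log_var_con) * loss_con
                + self.log_var_ce + self.log_var_con)

if __name__ == "__main__":
    B, C, D = 32, 10, 128
    emb = torch.randn(B, D)               # backbone embeddings
    protos = torch.randn(C, D)            # e.g. running class means
    labels = torch.randint(0, C, (B,))
    logits = torch.randn(B, C)            # classifier-head outputs
    balance = UncertaintyWeightedLoss()
    loss = balance(F.cross_entropy(logits, labels),
                   PrototypicalContrastiveLoss()(emb, protos, labels))
    print(loss.item())

Under this scheme, the optimizer itself learns how much weight each term receives as training progresses, rather than relying on a fixed hand-tuned coefficient.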

@article{raichur2025_2405.11067,
  title={Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning},
  author={Nisha L. Raichur and Lucas Heublein and Tobias Feigl and Alexander Rügamer and Christopher Mutschler and Felix Ott},
  journal={arXiv preprint arXiv:2405.11067},
  year={2025}
}
