ResearchTrend.AI

One-Shot Domain Incremental Learning

25 March 2024
Yasushi Esaki
Satoshi Koide
Takuro Kutsuna
Topics: CLL, VLM
Abstract

Domain incremental learning (DIL) has been studied for deep neural network classifiers. In DIL, samples from new domains are observed over time, and the model must classify inputs from all domains seen so far. In practice, however, DIL may have to be performed when samples from a new domain are observed only rarely. We therefore consider the extreme case in which only one sample from the new domain is available, which we call one-shot DIL. We first show empirically that existing DIL methods do not work well in one-shot DIL, and analyze the reason for this failure through various investigations. Our analysis clarifies that the difficulty of one-shot DIL is caused by the statistics in the batch normalization layers. We therefore propose a technique regarding these statistics and demonstrate its effectiveness through experiments on open datasets. The code is available at this https URL.
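The abstract attributes the failure to the running statistics in batch normalization layers. The paper's exact technique is not spelled out here, but the underlying problem can be sketched in plain NumPy: updating a PyTorch-style exponential moving average with a batch of size one pulls the running mean toward the single shifted sample and collapses the batch variance estimate to zero, whereas freezing the statistics (the function, momentum value, and shift magnitude below are illustrative assumptions, not the authors' method) preserves the source-domain estimates.

```python
import numpy as np

def bn_update(running_mean, running_var, batch, momentum=0.1):
    # PyTorch-style BatchNorm running-statistics update (EMA over batches).
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)  # variance of a size-1 batch is 0
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

rng = np.random.default_rng(0)

# Source-domain statistics accumulated over many training batches.
src_mean, src_var = np.zeros(4), np.ones(4)

# A single sample from a shifted new domain: a "batch" of size 1.
one_shot = rng.normal(loc=5.0, scale=1.0, size=(1, 4))

# Naive update: the mean drifts toward the one sample and the
# variance estimate is dragged toward 0 by the size-1 batch.
m1, v1 = bn_update(src_mean, src_var, one_shot)

# Freezing the statistics (momentum = 0) keeps the source estimates intact.
m0, v0 = bn_update(src_mean, src_var, one_shot, momentum=0.0)
```

This is only a minimal illustration of why a single new-domain sample is a pathological input for batch normalization; the proposed technique in the paper should be consulted for the actual treatment of these statistics.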

@article{esaki2025_2403.16707,
  title={One-Shot Domain Incremental Learning},
  author={Yasushi Esaki and Satoshi Koide and Takuro Kutsuna},
  journal={arXiv preprint arXiv:2403.16707},
  year={2025}
}