Learning Embedding Adaptation for Few-Shot Learning

Abstract

Learning with limited data is a key challenge for visual recognition. Few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying that function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned to be optimally discriminative with respect to the unseen classes, where discerning among them is the target task. In this paper, we propose a novel approach that adapts the embedding model to the target classification task, yielding embeddings that are task-specific and discriminative. To this end, we employ a self-attention mechanism, the Transformer, to transform the embeddings from task-agnostic to task-specific by relating test instances to training instances in both seen and unseen classes. We verify the effectiveness of our model on the standard few-shot classification benchmark and on four extended few-shot learning settings with essential use cases, i.e., cross-domain, transductive, generalized, and large-scale low-shot few-shot learning. Our approach achieves consistent improvements over baseline models and previous state-of-the-art methods.
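The core operation described above, adapting a set of task-agnostic embeddings with self-attention so that each instance's representation is refined by its relations to the others, can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the random matrices `Wq`, `Wk`, `Wv` stand in for learned projection weights, and the single-head, residual form is an assumption for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_embeddings(embeddings, d_k=None, seed=0):
    """One self-attention pass over a set of instance embeddings (N, d).

    Returns task-specific embeddings of the same shape, where each row is
    the original (task-agnostic) embedding plus an attention-weighted
    combination of value projections of all instances in the set.
    Projection matrices are random here, standing in for learned weights.
    """
    n, d = embeddings.shape
    d_k = d_k or d
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = embeddings @ Wq, embeddings @ Wk, embeddings @ Wv
    # (N, N) matrix of pairwise relations among the instances in the set.
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    # Residual connection: task-agnostic embedding + set-contextual update.
    return embeddings + attn @ V
```

In a few-shot episode, the set passed in would typically contain the support (training) instances, so that each class representation is contextualized by the particular classes being discriminated in the target task.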
