Towards Enabling Meta-Learning from Target Models

8 April 2021
Su Lu
Han-Jia Ye
Le Gan
De-Chuan Zhan
Abstract

Meta-learning can extract an inductive bias from previous learning experience and assist the training of new tasks. It is often realized by optimizing a meta-model with the evaluation loss of task-specific solvers. For simplicity, most existing algorithms sample non-overlapping support sets and query sets to train and evaluate the solvers, respectively (the S/Q protocol). Alternatively, a task-specific solver can be evaluated by comparing it to a target model T, i.e., the optimal model for the task or a model that behaves well enough on it (the S/T protocol). Although under-explored, the S/T protocol has unique advantages, such as offering more informative supervision, but it is computationally expensive. This paper looks into this special evaluation method and takes a step towards putting it into practice. We find that, with only a small ratio of tasks equipped with target models, classic meta-learning algorithms can be improved substantially without consuming many resources. We empirically verify the effectiveness of the S/T protocol in a typical application of meta-learning, i.e., few-shot learning. Specifically, after constructing target models by fine-tuning a pre-trained network on hard tasks, we match the task-specific solvers to the target models via knowledge distillation.
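The matching step the abstract describes can be illustrated with a standard knowledge-distillation loss: the task-specific solver (student) is trained so that its temperature-softened predictions match those of the target model (teacher). The sketch below is a minimal illustration of that idea, not the paper's implementation; the temperature value and function names are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(solver_logits, target_logits, temperature=4.0):
    """KL(target || solver) on temperature-softened predictions.

    Matching the solver's soft predictions to the target model's output
    provides richer supervision than hard query-set labels alone, which
    is the intuition behind the S/T protocol. The temperature here is
    an illustrative choice, not a value from the paper.
    """
    p = softmax(target_logits, temperature)  # teacher: target model
    q = softmax(solver_logits, temperature)  # student: task-specific solver
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A solver whose logits already agree with the target incurs zero loss;
# a misaligned solver incurs a positive loss that training can reduce.
aligned = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
misaligned = distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])
```

In the paper's setting this loss would be minimized over the solver's parameters for the subset of tasks that have target models, while the remaining tasks fall back on the usual S/Q evaluation.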
