Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
30 November 2020
Han-Jia Ye
Lu Han
De-Chuan Zhan
Abstract

Meta-learning has become a practical approach to few-shot image classification, where a visual recognition system is constructed from limited annotated data. An inductive bias such as the embedding is learned from a base class set with ample labeled examples and then generalized to few-shot tasks with novel classes. Surprisingly, we find that base class labels are not necessary: discriminative embeddings can be meta-learned in an unsupervised manner. Comprehensive analyses indicate that two modifications -- a semi-normalized distance metric and sufficient sampling -- improve unsupervised meta-learning (UML) significantly. Based on the modified baseline, we further amplify or compensate for the characteristics of tasks when training a UML model. First, mixed embeddings are incorporated to increase the difficulty of few-shot tasks. Next, we utilize a task-specific embedding transformation to handle the specific properties of each task while maintaining the generalization ability of the vanilla embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods by a 4-10% margin, and that embeddings learned with our UML achieve comparable or even better performance than their supervised variants.
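The abstract names a "semi-normalized distance metric" as one of the two key modifications. The paper itself should be consulted for the exact definition; a minimal sketch of one plausible reading -- scoring queries against class prototypes by an inner product where only one side (here, the prototypes) is L2-normalized -- is shown below. The function name, the choice of which side to normalize, and the temperature `tau` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def semi_normalized_similarity(queries, prototypes, tau=1.0):
    """Hypothetical sketch of a semi-normalized similarity.

    queries:    (Q, D) query embeddings, left un-normalized.
    prototypes: (C, D) per-class prototype embeddings, L2-normalized
                before the inner product -- the "semi" in semi-normalized.
    tau:        assumed temperature scaling for the resulting logits.
    Returns a (Q, C) matrix of similarity scores (classification logits).
    """
    # Normalize only the prototype side of the inner product.
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (queries @ protos.T) / tau
```

Under this reading, rescaling the prototypes leaves the scores unchanged (their norm is divided out), while the magnitude of the query embedding still influences the logits -- unlike a fully normalized cosine similarity, where both sides would be scale-invariant.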
