Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions

3 September 2022
Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Dongling Zhan, Tiehang Duan, Mingchen Gao
Topics: OOD, CLL

Papers citing "Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions"

9 / 9 papers shown

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao
28 May 2023

Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, Dacheng Tao
20 Mar 2023

Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness
Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo, Mingchen Gao
29 Sep 2021

Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer
James Smith, Jonathan C. Balloch, Yen-Chang Hsu, Z. Kira
Topics: CLL
23 Jan 2021

Semi-Supervised Dialogue Policy Learning via Stochastic Reward Estimation
Xinting Huang, Jianzhong Qi, Yu Sun, Rui Zhang
Topics: OffRL
09 May 2020

Adversarial Continual Learning
Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach
Topics: CLL, VLM
21 Mar 2020

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals
19 Sep 2019

Probabilistic Model-Agnostic Meta-Learning
Chelsea Finn, Kelvin Xu, Sergey Levine
Topics: BDL
07 Jun 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Topics: OOD
09 Mar 2017