Learning where to learn: Gradient sparsity in meta and continual learning

27 October 2021 (arXiv:2110.14402)
Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento
Communities: CLL

Papers citing "Learning where to learn: Gradient sparsity in meta and continual learning"

14 / 14 papers shown, newest first:
• Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms (17 Jan 2025)
  Fatemeh Askari, Amirreza Fateh, Mohammad Reza Mohammadi
• MetaDiff: Meta-Learning with Conditional Diffusion for Few-Shot Learning (31 Jul 2023)
  Baoquan Zhang, Chuyao Luo, Demin Yu, Huiwei Lin, Xutao Li, Yunming Ye, Bowen Zhang
  Communities: DiffM
• Online Continual Learning for Robust Indoor Object Recognition (19 Jul 2023)
  Umberto Michieli, Mete Ozay
• Competitive plasticity to reduce the energetic costs of learning (04 Apr 2023)
  Mark C. W. van Rossum
• Meta-Learning with a Geometry-Adaptive Preconditioner (04 Apr 2023)
  Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, Wonjong Rhee
  Communities: AI4CE
• How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition (15 Jul 2022)
  Jorge Armando Mendez Mendez, Eric Eaton
  Communities: KELM, CLL
• Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (06 Jun 2022)
  Zhiwei Deng, Olga Russakovsky
  Communities: FedML, DD
• Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile (04 Jun 2022)
  Dong Chen, Lingfei Wu, Siliang Tang, Xiao Yun, Bo Long, Yueting Zhuang
  Communities: VLM, NoLa
• Continual Feature Selection: Spurious Features in Continual Learning (02 Mar 2022)
  Timothée Lesort
  Communities: CLL
• New Insights on Reducing Abrupt Representation Change in Online Continual Learning (11 Apr 2021)
  Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
  Communities: CLL
• TaskNorm: Rethinking Batch Normalization for Meta-Learning (06 Mar 2020)
  J. Bronskill, Jonathan Gordon, James Requeima, Sebastian Nowozin, Richard Turner
• The large learning rate phase of deep learning: the catapult mechanism (04 Mar 2020)
  Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari
  Communities: ODL
• Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML (19 Sep 2019)
  Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals
• Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (09 Mar 2017)
  Chelsea Finn, Pieter Abbeel, Sergey Levine
  Communities: OOD