Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning

arXiv:2003.05856 · 12 March 2020

Massimo Caccia, Pau Rodríguez López, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin

Topics: CLL, KELM

Papers citing "Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning"

17 / 17 papers shown

GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models (05 Nov 2024)
Nizar Islah, Justine Gehring, Diganta Misra, Eilif B. Muller, Irina Rish, Terry Yue Zhuo, Massimo Caccia
Topics: SyDa

Sequential Learning in the Dense Associative Memory (24 Sep 2024)
Hayden McAlister, Anthony Robins, Lech Szymanski
Topics: CLL

Simple and Scalable Strategies to Continually Pre-train Large Language Models (13 Mar 2024)
Adam Ibrahim, Benjamin Thérien, Kshitij Gupta, Mats L. Richter, Quentin Anthony, Timothée Lesort, Eugene Belilovsky, Irina Rish
Topics: KELM, CLL

Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning (26 Mar 2023)
Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Topics: CLL

Building a Subspace of Policies for Scalable Continual Learning (18 Nov 2022)
Jean-Baptiste Gaya, T. Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, Roberta Raileanu
Topics: CLL

Continual Feature Selection: Spurious Features in Continual Learning (02 Mar 2022)
Timothée Lesort
Topics: CLL

DualNet: Continual Learning, Fast and Slow (01 Oct 2021)
Quang Pham, Chenghao Liu, S. Hoi
Topics: CLL

Graceful Degradation and Related Fields (21 Jun 2021)
J. Dymond

SPeCiaL: Self-Supervised Pretraining for Continual Learning (16 Jun 2021)
Lucas Caccia, Joelle Pineau
Topics: CLL, SSL

Towards mental time travel: a hierarchical memory for reinforcement learning agents (28 May 2021)
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill

New Insights on Reducing Abrupt Representation Change in Online Continual Learning (11 Apr 2021)
Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
Topics: CLL

Understanding Continual Learning Settings with Data Distribution Drift Analysis (04 Apr 2021)
Timothée Lesort, Massimo Caccia, Irina Rish

Continual Learning for Recurrent Neural Networks: an Empirical Evaluation (12 Mar 2021)
Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, D. Bacciu
Topics: CLL

Meta Continual Learning via Dynamic Programming (05 Aug 2020)
R. Krishnan, Prasanna Balaprakash
Topics: CLL

Gradient-based Editing of Memory Examples for Online Task-free Continual Learning (27 Jun 2020)
Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren
Topics: CLL, KELM, BDL

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML (19 Sep 2019)
Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (09 Mar 2017)
Chelsea Finn, Pieter Abbeel, Sergey Levine
Topics: OOD