Continual Learning with Foundation Models: An Empirical Study of Latent Replay
O. Ostapenko, Timothée Lesort, P. Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin
30 April 2022
Papers citing "Continual Learning with Foundation Models: An Empirical Study of Latent Replay" (18 papers shown)
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer
Zhenrong Liu, Janne M. J. Huttunen, Mikko Honkala. CLL. 13 May 2025.

Bielik 11B v2 Technical Report
Krzysztof Ociepa, Łukasz Flis, Krzysztof Wróbel, Adrian Gwoździej, Remigiusz Kinas. 05 May 2025.

Bielik v3 Small: Technical Report
Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej. 05 May 2025.

Lightweight Online Adaption for Time Series Foundation Model Forecasts
Thomas L. Lee, William Toner, Rajkarn Singh, Artjom Joosem, Martin Asenov. AI4TS. 18 Feb 2025.

Self-Data Distillation for Recovering Quality in Pruned Large Language Models
Vithursan Thangarasa, Ganesh Venkatesh, Mike Lasby, Nish Sinnadurai, Sean Lie. SyDa. 13 Oct 2024.

An Empirical Analysis of Forgetting in Pre-trained Models with Incremental Low-Rank Updates
Albin Soutif-Cormerais, Simone Magistri, Joost van de Weijer, Andrew D. Bagdanov. 28 May 2024.

Future-Proofing Class-Incremental Learning
Quentin Jodelet, Xin Liu, Yin Jun Phua, Tsuyoshi Murata. VLM. 04 Apr 2024.

Read Between the Layers: Leveraging Multi-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models
Kyra Ahrens, Hans Hergen Lehmann, Jae Hee Lee, Stefan Wermter. CLL. 13 Dec 2023.

Continual Pre-Training of Large Language Models: How to (re)warm your model?
Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin G. Anthony, Eugene Belilovsky, Irina Rish, Timothée Lesort. KELM. 08 Aug 2023.

Emergent and Predictable Memorization in Large Language Models
Stella Biderman, USVSN Sai Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin G. Anthony, Shivanshu Purohit, Edward Raff. 21 Apr 2023.

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, Lichao Sun. 07 Mar 2023.

A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning
Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny. VLM, SSL, CLL. 10 Oct 2022.

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush. LRM. 15 Oct 2021.

Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo. CLL, KELM. 07 Oct 2021.

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin. 29 Apr 2021.

How Well Does Self-Supervised Pre-Training Perform with Streaming Data?
Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng. 25 Apr 2021.

ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor. SSeg, VLM, CLIP. 22 Apr 2021.

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei. 23 Jan 2020.