Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
arXiv:2004.12651 · 27 April 2020
Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Communities: KELM, CLL

Papers citing "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" (45 papers shown)

Memorization and Knowledge Injection in Gated LLMs
Xu Pan, Ely Hahami, Zechen Zhang, H. Sompolinsky
Communities: KELM, CLL, RALM
30 Apr 2025

QuIM-RAG: Advancing Retrieval-Augmented Generation with Inverted Question Matching for Enhanced QA Performance
Binita Saha, Utsha Saha, Muhammad Zubair Malik
Communities: RALM, 3DV
06 Jan 2025

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets
Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth
Communities: AAML
03 Jan 2025

Gradient Localization Improves Lifelong Pretraining of Language Models
Jared Fernandez, Yonatan Bisk, Emma Strubell
Communities: KELM
07 Nov 2024

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024

REFINE on Scarce Data: Retrieval Enhancement through Fine-Tuning via Model Fusion of Embedding Models
Ambuje Gupta, Mrinal Rawat, Andreas Stolcke, Roberto Pieraccini
Communities: RALM
16 Oct 2024

MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning
Yupeng Chen, Senmiao Wang, Zhihang Lin, Yushun Zhang, Tian Ding, Ruoyu Sun
Communities: CLL
30 Jul 2024

Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning
Yeongbin Seo, Dongha Lee, Jinyoung Yeo
Communities: CLL, KELM
24 Jul 2024

Leveraging Logical Rules in Knowledge Editing: A Cherry on the Top
Keyuan Cheng, Muhammad Asif Ali, Shu Yang, Gang Lin, Yuxuan Zhai, Haoyang Fei, Ke Xu, Lu Yu, Lijie Hu, Di Wang
Communities: KELM
24 May 2024

Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge
H. R. Medeiros, Masih Aminbeidokhti, F. Guerrero-Peña, David Latortue, Eric Granger, M. Pedersoli
Communities: VLM
01 Apr 2024

Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs
Xiaoze Liu, Feijie Wu, Tianyang Xu, Zhuo Chen, Yichi Zhang, Xiaoqian Wang, Jing Gao
Communities: HILM
01 Apr 2024

Improving Sequential Recommendations with LLMs
Artun Boz, Wouter Zorgdrager, Zoe Kotti, Jesse Harte, Panos Louridas, Dietmar Jannach, Vassilios Karakoidas, Marios Fragkoulis
Communities: KELM, LRM
02 Feb 2024

ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning
Fanqing Meng, Wenqi Shao, Quanfeng Lu, Peng Gao, Kaipeng Zhang, Yu Qiao, Ping Luo
04 Jan 2024

Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models
Ibtihel Amara, Vinija Jain, Aman Chadha
12 Dec 2023

Online Continual Knowledge Learning for Language Models
Yuhao Wu, Tongjun Shi, Karthick Sharma, Chun Seah, Shuhao Zhang
Communities: CLL, KELM
16 Nov 2023

UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science
Yazheng Yang, Yuqi Wang, Guangyi Liu, Ledell Yu Wu, Qi Liu
Communities: LMTD
18 Jul 2023

Contrastive Learning Reduces Hallucination in Conversations
Weiwei Sun, Zhengliang Shi, Shen Gao, Pengjie Ren, Maarten de Rijke, Z. Ren
20 Dec 2022

G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks
Zhongwei Wan, Yichun Yin, Wei Zhang, Jiaxin Shi, Lifeng Shang, Guangyong Chen, Xin Jiang, Qun Liu
Communities: VLM, CLL
07 Dec 2022

Parameter-Efficient Tuning Makes a Good Classification Head
Zhuoyi Yang, Ming Ding, Yanhui Guo, Qingsong Lv, Jie Tang
Communities: VLM
30 Oct 2022

ROSE: Robust Selective Fine-tuning for Pre-trained Language Models
Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, R. Jiang
Communities: AAML
18 Oct 2022

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

Parameter-Efficient Finetuning for Robust Continual Multilingual Learning
Kartikeya Badola, Shachi Dave, Partha P. Talukdar
Communities: CLL, KELM
14 Sep 2022

PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
Communities: VLM, CLL
22 Aug 2022

PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
Sanghoon Myung, I. Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Daesin Kim, Kee-Eung Kim, C. Jeong
12 Jun 2022

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan
Communities: TDI
22 May 2022

Clinical Prompt Learning with Frozen Language Models
Niall Taylor, Yi Zhang, Dan W Joyce, A. Nevado-Holgado, Andrey Kormilitzin
Communities: VLM, LM&MA
11 May 2022

Efficient Few-Shot Fine-Tuning for Opinion Summarization
Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer
04 May 2022

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
Communities: CLL, KELM
29 Apr 2022

Plug-and-Play Adaptation for Continuously-updated QA
Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
Communities: KELM
27 Apr 2022

Parameter-Efficient Abstractive Question Answering over Tables or Text
Vaishali Pal, Evangelos Kanoulas, Maarten de Rijke
Communities: LMTD
07 Apr 2022

NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
24 Feb 2022

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Communities: VLM
14 Dec 2021

Kronecker Factorization for Preventing Catastrophic Forgetting in Large-scale Medical Entity Linking
Denis Jered McInerney, Luyang Kong, Kristjan Arumae, Byron C. Wallace, Parminder Bhatia
Communities: CLL
11 Nov 2021

Adapting to the Long Tail: A Meta-Analysis of Transfer Learning Research for Language Understanding Tasks
Aakanksha Naik, J. Lehman, Carolyn Rose
02 Nov 2021

Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
Communities: CLL, KELM
07 Oct 2021

Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning
Seanie Lee, Haebeom Lee, Juho Lee, S. Hwang
Communities: MoMe, CLL
06 Oct 2021

Improving Scheduled Sampling with Elastic Weight Consolidation for Neural Machine Translation
Michalis Korakakis, Andreas Vlachos
Communities: CLL
13 Sep 2021

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Prasetya Ajie Utama, N. Moosavi, Victor Sanh, Iryna Gurevych
Communities: AAML
09 Sep 2021

CausalBERT: Injecting Causal Knowledge Into Pre-trained Models with Minimal Supervision
Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu
Communities: CML
21 Jul 2021

A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou, Vivek Srikumar
27 Jun 2021

Do Language Models Perform Generalizable Commonsense Inference?
Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren
Communities: ReLM, LRM
22 Jun 2021

Continual Learning for Natural Language Generation in Task-oriented Dialog Systems
Fei Mi, Liangwei Chen, Mengjie Zhao, Minlie Huang, Boi Faltings
Communities: CLL, KELM
02 Oct 2020

Transferability of Natural Language Inference to Biomedical Question Answering
Minbyul Jeong, Mujeen Sung, Gangwoo Kim, Donghyeon Kim, Wonjin Yoon, J. Yoo, Jaewoo Kang
01 Jul 2020

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
Communities: MoE
25 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Communities: ELM
20 Apr 2018