Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study
Mingxu Tao, Yansong Feng, Dongyan Zhao
arXiv:2303.01081, 2 March 2023
Topics: CLL, KELM
Papers citing "Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study" (8 papers)

Unlocking the Potential of Model Merging for Low-Resource Languages
Mingxu Tao, Chen Zhang, Quzhe Huang, Tianyao Ma, Songfang Huang, Dongyan Zhao, Yansong Feng
04 Jul 2024. Topics: CLL, MoMe

Continual Learning of Large Language Models: A Comprehensive Survey
Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, Hao Wang
25 Apr 2024. Topics: CLL, KELM, LRM

Balancing the Causal Effects in Class-Incremental Learning
Junhao Zheng, Ruiyan Wang, Chongzhi Zhang, Hu Feng, Qianli Ma
15 Feb 2024. Topics: CML, CLL

Can LLMs Learn New Concepts Incrementally without Forgetting?
Junhao Zheng, Shengjie Qiu, Qianli Ma
13 Feb 2024. Topics: CLL

What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
Xisen Jin, Xiang Ren
02 Feb 2024. Topics: KELM, CLL

Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models
Junhao Zheng, Shengjie Qiu, Qianli Ma
13 Dec 2023

Lawyer LLaMA Technical Report
Quzhe Huang, Mingxu Tao, Chen Zhang, Zhenwei An, Cong Jiang, Zhibin Chen, Zirui Wu, Yansong Feng
24 May 2023. Topics: ELM, ALM, AILaw

Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov
24 Feb 2021