Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning

arXiv:2402.18865 · 29 February 2024
Weijieying Ren, Xinlong Li, Lei Wang, Tianxiang Zhao, Wei Qin
CLL · KELM

Papers citing "Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning"

11 / 11 papers shown

Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback
Yutao Yang, Jie Zhou, Junsong Li, Qianjun Pan, Bihao Zhan, Qin Chen, Xipeng Qiu, Liang He
CLL · 15 May 2025

Pre-Act: Multi-Step Planning and Reasoning Improves Acting in LLM Agents
Mrinal Rawat, Ambuje Gupta, Rushil Goomer, Alessandro Di Bari, Neha Gupta, Roberto Pieraccini
LLMAG · LRM · 15 May 2025

Task-Core Memory Management and Consolidation for Long-term Continual Learning
Tianyu Huai, Jie Zhou, Yuxuan Cai, Qin Chen, Wen Wu, Xingjiao Wu, Xipeng Qiu, Liang He
CLL · 15 May 2025

Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding
Dawei Huang, Qing Li, Chuan Yan, Zebang Cheng, Jiaming Ji, Xiang Li, Yangqiu Song, Xidong Wang, Zheng Lian, Xiaojiang Peng
10 May 2025

SECURA: Sigmoid-Enhanced CUR Decomposition with Uninterrupted Retention and Low-Rank Adaptation in Large Language Models
Yuxuan Zhang
CLL · ALM · 25 Feb 2025

Refining Input Guardrails: Enhancing LLM-as-a-Judge Efficiency Through Chain-of-Thought Fine-Tuning and Alignment
Melissa Kazemi Rad, Huy Nghiem, Andy Luo, Sahil Wadhwa, Mohammad Sorower, Stephen Rawls
AAML · 22 Jan 2025

Tensor Product Attention Is All You Need
Yifan Zhang, Yifeng Liu, Huizhuo Yuan, Zhen Qin, Yang Yuan, Q. Gu, Andrew Chi-Chih Yao
11 Jan 2025

Human-inspired Perspectives: A Survey on AI Long-term Memory
Zihong He, Weizhe Lin, Hao Zheng, Fan Zhang, Matt Jones, Laurence Aitchison, X. Xu, Miao Liu, Per Ola Kristensson, Junxiao Shen
01 Nov 2024

MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning
Yupeng Chen, Senmiao Wang, Zhihang Lin, Yushun Zhang, Tian Ding, Ruoyu Sun
CLL · 30 Jul 2024

Can LLMs Learn New Concepts Incrementally without Forgetting?
Junhao Zheng, Shengjie Qiu, Qianli Ma
CLL · 13 Feb 2024

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan
ELM · ReLM · LRM · 20 Sep 2022