
Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models

Abstract

Parameter-efficient fine-tuning (PEFT) methods are increasingly used with pre-trained language models (PLMs) for continual learning (CL). These methods involve training a PEFT module for each new task and using similarity-based selection to route modules during inference. However, they face two major limitations: 1) interference with already learned modules and 2) suboptimal routing when composing modules. In this paper, we introduce a method that isolates the training of PEFT modules for task specialization. Then, before evaluation, it learns to compose the previously learned modules by training a router that leverages samples from a small memory. We evaluate our method in two CL setups using several benchmarks. Our results show that our method provides a better composition of PEFT modules, leading to better generalization and performance compared to previous methods.
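
To make the described mechanism concrete, below is a minimal sketch (not the authors' implementation) of router-based adapter composition: per-task low-rank adapters are trained in isolation and frozen, and a small router is then trained on memory samples to produce input-dependent mixture weights over them. The names `LowRankAdapter`, `AdapterRouter`, `train_router`, `task_head`, and `memory_loader` are illustrative assumptions, not components from the paper.

```python
# Hedged sketch of dynamic adapter composition via a learned router.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankAdapter(nn.Module):
    """One frozen, task-specific low-rank (LoRA-style) adapter."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # residual starts as a no-op

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(h))


class AdapterRouter(nn.Module):
    """Composes frozen adapters with input-dependent mixture weights."""

    def __init__(self, dim: int, adapters: list):
        super().__init__()
        self.adapters = nn.ModuleList(adapters)
        for a in self.adapters:              # isolate previously learned modules
            a.requires_grad_(False)
        self.gate = nn.Linear(dim, len(adapters))  # the only trainable part

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, dim) hidden state from the frozen PLM layer
        weights = F.softmax(self.gate(h), dim=-1)                      # (batch, n)
        outputs = torch.stack([a(h) for a in self.adapters], dim=1)    # (batch, n, dim)
        return h + (weights.unsqueeze(-1) * outputs).sum(dim=1)


def train_router(router, task_head, memory_loader, epochs: int = 3, lr: float = 1e-3):
    """Train only the router gate on a small memory of past-task examples
    (assumed to yield hidden states and labels); adapters stay frozen."""
    opt = torch.optim.AdamW(router.gate.parameters(), lr=lr)
    for _ in range(epochs):
        for h, y in memory_loader:
            logits = task_head(router(h))
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

In this sketch the gate replaces similarity-based selection with learned, input-conditioned composition weights, which is one plausible way to realize the routing step the abstract describes.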
