Multi-Head Adapter Routing for Cross-Task Generalization

7 November 2022
Lucas Caccia
E. Ponti
Zhan Su
Matheus Pereira
Nicolas Le Roux
Alessandro Sordoni
arXiv:2211.03831
Abstract

Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists of pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by fine-tuning only the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than of modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits high gradient alignment between training tasks. We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation, and propose MHR-μ, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task. This establishes MHR-μ as an effective method for single-adapter fine-tuning. We also show that MHR-μ can be used as an effective zero-shot transfer method by training the average of the pre-trained adapters for a few additional steps on the multi-task training set: this yields gains of up to 3% absolute accuracy over the baselines.
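The routing idea in the abstract can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch illustration of multi-head routing over an inventory of LoRA adapters: the input dimension is split into heads, and each (task, head) pair learns its own mixing distribution over the inventory, which is the finer-grained routing the abstract refers to (Poly corresponds to a single head). All names here (MultiHeadRoutedLoRA, n_skills, n_heads, etc.) are invented for illustration and are not taken from the paper's code; the exact parameterization in the paper may differ.

```python
import torch
import torch.nn as nn


class MultiHeadRoutedLoRA(nn.Module):
    """Illustrative sketch of multi-head routing (MHR) over a LoRA inventory.

    Assumption: `n_skills` low-rank adapters are shared across `n_tasks`; the
    input dimension is split into `n_heads` blocks, and each (task, head) pair
    learns routing logits over the inventory, so different heads can recombine
    different subsets of adapter parameters.
    """

    def __init__(self, in_dim, out_dim, n_tasks, n_skills=8, n_heads=4, rank=4):
        super().__init__()
        assert in_dim % n_heads == 0, "in_dim must be divisible by n_heads"
        self.n_heads, self.head_dim = n_heads, in_dim // n_heads

        # Frozen base layer (stands in for a pre-trained weight matrix).
        self.base = nn.Linear(in_dim, out_dim, bias=False)
        self.base.weight.requires_grad_(False)

        # LoRA inventory, stored per (skill, head) slice.
        self.lora_a = nn.Parameter(
            torch.randn(n_skills, n_heads, self.head_dim, rank) * 0.02)
        self.lora_b = nn.Parameter(
            torch.zeros(n_skills, n_heads, rank, out_dim))

        # Per-task, per-head routing logits over the skill inventory.
        # Updating only this tensor on a new task mimics the MHR-z regime.
        self.router = nn.Parameter(torch.zeros(n_tasks, n_heads, n_skills))

    def forward(self, x, task_id):
        # x: (batch, in_dim); task_id: integer index of the current task.
        probs = torch.softmax(self.router[task_id], dim=-1)        # (heads, skills)

        # Mix the adapter inventory independently for each head.
        a = torch.einsum("hs,shdr->hdr", probs, self.lora_a)       # (heads, head_dim, rank)
        b = torch.einsum("hs,shro->hro", probs, self.lora_b)       # (heads, rank, out_dim)

        # Apply the routed low-rank update head by head and sum the contributions.
        x_heads = x.view(x.shape[0], self.n_heads, self.head_dim)  # (batch, heads, head_dim)
        delta = torch.einsum("bhd,hdr,hro->bo", x_heads, a, b)

        return self.base(x) + delta


# Usage sketch: route a batch of inputs for task 3.
# layer = MultiHeadRoutedLoRA(in_dim=512, out_dim=512, n_tasks=10)
# out = layer(torch.randn(4, 512), task_id=3)
```

In this sketch, freezing lora_a/lora_b and updating only router on a held-out task would correspond to the MHR-z regime described above, while dropping the router and fine-tuning a single adapter initialized as the average of the inventory mirrors the idea behind MHR-μ.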
