AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

14 February 2023
Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, Jesse Dodge
MoMe
ArXiv · PDF · HTML
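The title refers to averaging the weights of several adapters trained on different domains. As a rough illustration only, and not the authors' full procedure (which also covers how the adapters to average are chosen), a uniform weight average over adapter parameter dictionaries could look like the PyTorch sketch below; average_adapters and adapter_state_dicts are illustrative names, not from the paper or any library.

    import torch

    def average_adapters(adapter_state_dicts, weights=None):
        """Average a list of adapter state dicts with matching keys and shapes.

        Uses a uniform average by default; `weights` may supply any convex
        combination. Illustrative sketch, not the paper's exact method.
        """
        if weights is None:
            weights = [1.0 / len(adapter_state_dicts)] * len(adapter_state_dicts)
        averaged = {}
        for name in adapter_state_dicts[0]:
            # Sum the weighted copies of each parameter across adapters.
            averaged[name] = sum(
                w * sd[name].to(torch.float32)
                for w, sd in zip(weights, adapter_state_dicts)
            )
        return averaged

Used this way, several domain-specific adapter checkpoints would be collapsed into a single set of adapter weights that can be plugged back into the frozen pretrained model.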

Papers citing "AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models"

Showing 5 of 55 citing papers.
ADAPTERMIX: Exploring the Efficacy of Mixture of Adapters for Low-Resource TTS Adaptation
Ambuj Mehrish, Abhinav Ramesh Kashyap, Yingting Li, Navonil Majumder, Soujanya Poria
29 May 2023
Scaling Expert Language Models with Unsupervised Domain Discovery
Suchin Gururangan, Margaret Li, M. Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer
MoE
24 Mar 2023
Merging Decision Transformers: Weight Averaging for Forming Multi-Task Policies
Daniel Lawson, A. H. Qureshi
MoMe, OffRL
14 Mar 2023
Multi-Head Adapter Routing for Cross-Task Generalization
Lucas Caccia, E. Ponti, Zhan Su, Matheus Pereira, Nicolas Le Roux, Alessandro Sordoni
07 Nov 2022
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020