ResearchTrend.AI

AdapterFusion: Non-Destructive Task Composition for Transfer Learning
arXiv:2005.00247, 1 May 2020
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych

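For readers scanning this citation list, a minimal sketch of the idea behind the cited paper may help: AdapterFusion keeps several independently trained bottleneck adapters frozen and learns a small attention module (query from the transformer layer output, keys and values from the adapter outputs) that mixes the adapters per token, so no single task's knowledge is overwritten. The sketch below is illustrative plain PyTorch, not the authors' released implementation; the class names, bottleneck size, and ReLU choice are assumptions made here for brevity.

```python
# Illustrative sketch of AdapterFusion-style composition (not the authors' code).
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))


class AdapterFusionBlock(nn.Module):
    """Per-token attention over the outputs of N frozen task adapters."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)

    def forward(self, h: torch.Tensor, adapter_outputs: list) -> torch.Tensor:
        # h: (batch, seq, hidden) output of the transformer sublayer.
        # adapter_outputs: list of N tensors, each (batch, seq, hidden).
        stacked = torch.stack(adapter_outputs, dim=2)          # (B, S, N, H)
        q = self.query(h).unsqueeze(2)                         # (B, S, 1, H)
        k = self.key(stacked)                                  # (B, S, N, H)
        v = self.value(stacked)                                # (B, S, N, H)
        scores = (q * k).sum(dim=-1)                           # (B, S, N)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # (B, S, N, 1)
        return (weights * v).sum(dim=2)                        # (B, S, H)


if __name__ == "__main__":
    hidden = 768
    adapters = [BottleneckAdapter(hidden) for _ in range(3)]  # e.g. 3 source tasks
    for adapter in adapters:
        adapter.requires_grad_(False)                         # adapters stay frozen
    fusion = AdapterFusionBlock(hidden)                       # only the fusion is trained
    h = torch.randn(2, 16, hidden)
    out = fusion(h, [adapter(h) for adapter in adapters])
    print(out.shape)                                          # torch.Size([2, 16, 768])
```

In this sketch only the fusion block receives gradients; the frozen adapters are what makes the composition "non-destructive".
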
Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning"

50 of 553 citing papers shown:

Independent Component Alignment for Multi-Task Learning
Dmitry Senushkin, Nikolay Patakin, Arseny Kuznetsov, Anton Konushin
30 May 2023

Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, ..., Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
30 May 2023

A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, J. Huang
29 May 2023

One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu
28 May 2023

Plug-and-Play Document Modules for Pre-trained Models
Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Bo Zhao, Maosong Sun
28 May 2023

Parameter-Efficient Fine-Tuning without Introducing New Latency
Baohao Liao, Yan Meng, Christof Monz
26 May 2023

TADA: Task-Agnostic Dialect Adapters for English
William B. Held, Caleb Ziems, Diyi Yang
26 May 2023

Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models
Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg
26 May 2023

mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations
Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder
23 May 2023

How to Solve Few-Shot Abusive Content Detection Using the Data We Actually Have
Viktor Hangya, Alexander Fraser
23 May 2023

When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
Jingwei Ni, Zhijing Jin, Qian Wang, Mrinmaya Sachan, Markus Leippold
23 May 2023

Modular Domain Adaptation for Conformer-Based Streaming ASR
Qiujia Li, Yue Liu, DongSeon Hwang, Tara N. Sainath, P. M. Mengibar
22 May 2023

DADA: Dialect Adaptation via Dynamic Aggregation of Linguistic Rules
Yanchen Liu, William B. Held, Diyi Yang
22 May 2023

TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
Chia-Chien Hung, Lukas Lange, Jannik Strötgen
22 May 2023

Communication Efficient Federated Learning for Multilingual Neural Machine Translation with Adapter
Yi Liu, Xiaohan Bi, Lei Li, Sishuo Chen, Wenkai Yang, Xu Sun
21 May 2023

G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
Anchun Gui, Jinqiang Ye, Han Xiao
17 May 2023

Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling
Y. Zhu, Xuebing Yang, Yuanyuan Wu, Wensheng Zhang
15 May 2023

Learning to Reason over Scene Graphs: A Case Study of Finetuning GPT-2 into a Robot Language Model for Grounded Task Planning
Georgia Chalvatzaki, A. Younes, Daljeet Nandha, An T. Le, Leonardo F. R. Ribeiro, Iryna Gurevych
12 May 2023

A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra
12 May 2023

SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)
B. Fetahu, Sudipta Kar, Zhiyu Zoey Chen, Oleg Rokhlenko, S. Malmasi
11 May 2023

Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, M. Lewis, Jimmy Ba, Amjad Almahairi
06 May 2023

Now It Sounds Like You: Learning Personalized Vocabulary On Device
Sida Wang, Ashish Shenoy, P. Chuang, John Nguyen
05 May 2023

Personalize Segment Anything Model with One Shot
Renrui Zhang, Zhengkai Jiang, Ziyu Guo, Shilin Yan, Junting Pan, Xianzheng Ma, Hao Dong, Peng Gao, Hongsheng Li
04 May 2023

Consolidator: Mergeable Adapter with Grouped Connections for Visual Adaptation
Tianxiang Hao, Hui Chen, Yuchen Guo, Guiguang Ding
30 Apr 2023

π-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
Chengyue Wu, Teng Wang, Yixiao Ge, Zeyu Lu, Rui-Zhi Zhou, Ying Shan, Ping Luo
27 Apr 2023

PVP: Pre-trained Visual Parameter-Efficient Tuning
Zhao Song, Ke Yang, Naiyang Guan, Junjie Zhu, Peng Qiao, Qingyong Hu
26 Apr 2023

I2I: Initializing Adapters with Improvised Knowledge
Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason
04 Apr 2023

Strong Baselines for Parameter Efficient Few-Shot Fine-tuning
S. Basu, Daniela Massiceti, S. Hu, Soheil Feizi
04 Apr 2023

UKP-SQuARE v3: A Platform for Multi-Agent QA Research
Haritz Puerto, Tim Baumgärtner, Rachneet Sachdeva, Haishuo Fang, Haotian Zhang, Sewin Tariverdian, Kexin Wang, Iryna Gurevych
31 Mar 2023

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Yu Qiao
28 Mar 2023

Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
28 Mar 2023

Learning Expressive Prompting With Residuals for Vision Transformers
Rajshekhar Das, Yonatan Dukler, Avinash Ravichandran, A. Swaminathan
27 Mar 2023

Frame Flexible Network
Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
26 Mar 2023

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
Changdae Oh, Hyeji Hwang, Hee-young Lee, Yongtaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
26 Mar 2023

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao
18 Mar 2023

LION: Implicit Vision Prompt Tuning
Haixin Wang, Jianlong Chang, Xiao Luo, Jinan Sun, Zhouchen Lin, Qi Tian
17 Mar 2023

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need
Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu
13 Mar 2023

An Overview on Language Models: Recent Developments and Outlook
Chengwei Wei, Yun Cheng Wang, Bin Wang, C.-C. Jay Kuo
10 Mar 2023

Extending the Pre-Training of BLOOM for Improved Support of Traditional Chinese: Models, Methods and Results
Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yen-Chen Wu, Yin-Hsiang Liao, Chin-Tung Lin, Da-shan Shiu, Wei-Yun Ma
08 Mar 2023

Towards Zero-Shot Functional Compositionality of Language Models
Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi
06 Mar 2023

MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering
Jingjing Jiang, Nanning Zheng
02 Mar 2023

Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding
Yingting Li, Ambuj Mehrish, Shuaijiang Zhao, Rishabh Bhardwaj, Amir Zadeh, Navonil Majumder, Rada Mihalcea, Soujanya Poria
02 Mar 2023

Learning to Grow Pretrained Models for Efficient Transformer Training
Peihao Wang, Yikang Shen, Lucas Torroba Hennigen, P. Greengard, Leonid Karlinsky, Rogerio Feris, David D. Cox, Zhangyang Wang, Yoon Kim
02 Mar 2023

Rethinking Efficient Tuning Methods from a Unified Perspective
Zeyinzi Jiang, Chaojie Mao, Ziyuan Huang, Yiliang Lv, Deli Zhao, Jingren Zhou
01 Mar 2023

SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases
Yanchen Liu, Jing Yang, Yan Chen, Jing Liu, Huaqin Wu
28 Feb 2023

Modular Deep Learning
Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, Edoardo Ponti
22 Feb 2023

Task-Specific Skill Localization in Fine-tuned Language Models
A. Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora
13 Feb 2023

Parameter-efficient Modularised Bias Mitigation via AdapterFusion
Deepak Kumar, Oleg Lesota, George Zerveas, Daniel Cohen, Carsten Eickhoff, Markus Schedl, Navid Rekabsaz
13 Feb 2023

Divergence-Based Domain Transferability for Zero-Shot Classification
Alexander Pugantsov, R. McCreadie
11 Feb 2023

Continual Pre-training of Language Models
Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, Bin Liu
07 Feb 2023