ResearchTrend.AI

AdapterFusion: Non-Destructive Task Composition for Transfer Learning
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
arXiv:2005.00247 (1 May 2020) [CLLMoMe]

Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning" (50 / 553 papers shown)
Frozen CLIP Models are Efficient Video Learners
  Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard de Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, Hongsheng Li [CLIPVLM] (06 Aug 2022)

Pro-tuning: Unified Prompt Tuning for Vision Tasks
  Xing Nie, Bolin Ni, Jianlong Chang, Gaomeng Meng, Chunlei Huo, Zhaoxiang Zhang, Shiming Xiang, Qi Tian, Chunhong Pan [AAMLVPVLMVLM] (28 Jul 2022)

Contrastive Adapters for Foundation Model Group Robustness
  Michael Zhang, Christopher Ré [VLM] (14 Jul 2022)

Convolutional Bypasses Are Better Vision Transformer Adapters
  Shibo Jie, Zhi-Hong Deng [VPVLM] (14 Jul 2022)

FairDistillation: Mitigating Stereotyping in Language Models
  Pieter Delobelle, Bettina Berendt (10 Jul 2022)

Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation
  Zejiang Hou, Julian Salazar, George Polovets (07 Jul 2022)

Continual Learning with Transformers for Image Classification
  Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cédric Archambeau [CLL] (28 Jun 2022)

ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning
  Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li (27 Jun 2022)
GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions
  Scott McCarley, Mihaela A. Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian (16 Jun 2022)

Sparse Structure Search for Parameter-Efficient Tuning
  Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun (15 Jun 2022)

TransVG++: End-to-End Visual Grounding with Language Conditioned Vision Transformer
  Jiajun Deng, Zhengyuan Yang, Daqing Liu, Tianlang Chen, Wen-gang Zhou, Yanyong Zhang, Houqiang Li, Wanli Ouyang [ViT] (14 Jun 2022)

Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning
  Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo [LRM] (08 Jun 2022)

Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
  Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, Markus Schedl, Navid Rekabsaz (30 May 2022)

AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
  Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo (26 May 2022)
Enhancing Continual Learning with Global Prototypes: Counteracting Negative Representation Drift
  Xueying Bai, Jinghuan Shang, Yifan Sun, Niranjan Balasubramanian [CLL] (24 May 2022)

Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer
  Ahmet Üstün, Arianna Bisazza, G. Bouma, Gertjan van Noord, Sebastian Ruder (24 May 2022)

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
  Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi (24 May 2022)

When does Parameter-Efficient Transfer Learning Work for Machine Translation?
  Ahmet Üstün, Asa Cooper Stickland (23 May 2022)

Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning
  Yiwei Li, Bin Sun, Shaoxiong Feng, Kan Li (23 May 2022)

FedAdapter: Efficient Federated Learning for Modern NLP
  Dongqi Cai, Yaozong Wu, Shangguang Wang, F. Lin, Mengwei Xu [FedMLAI4CE] (20 May 2022)

Lifting the Curse of Multilinguality by Pre-training Modular Transformers
  Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe [LRM] (12 May 2022)

AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling
  Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Yong Huang (12 May 2022)

AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
  Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee (30 Apr 2022)
On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules
  Divyam Goel, Raman Grover, Fatemeh H. Fard (19 Apr 2022)

Adapting BigScience Multilingual Model to Unseen Languages
  Zheng-Xin Yong, Vassilina Nikoulina (11 Apr 2022)

DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
  Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, ..., Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister [CLLVLMVPVLM] (10 Apr 2022)

Efficient Extraction of Pathologies from C-Spine Radiology Reports using Multi-Task Learning
  Arijit Sehanobish, Nathaniel K. Brown, Ishita Daga, Jayashri Pawar, Danielle Torres, Anasuya Das, M. Becker, Richard Herzog, Benjamin Odry, Ron Vianu [MedIm] (09 Apr 2022)

IDPG: An Instance-Dependent Prompt Generation Method
  Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. Vydiswaran, Hao Ma [VLM] (09 Apr 2022)

Improved and Efficient Conversational Slot Labeling through Question Answering
  Gabor Fuisz, Ivan Vulić, Samuel Gibbons, I. Casanueva, Paweł Budzianowski (05 Apr 2022)

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
  Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani [VLM] (03 Apr 2022)

Proper Reuse of Image Classification Features Improves Object Detection
  C. N. Vasconcelos, Vighnesh Birodkar, Vincent Dumoulin [VLM] (01 Apr 2022)
Parameter-efficient Model Adaptation for Vision Transformers
  Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Xinze Wang (29 Mar 2022)

A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization
  Fadi Biadsy, Youzheng Chen, Xia Zhang, Oleg Rybakov, Andrew Rosenberg, Pedro J. Moreno (23 Mar 2022)

Visual Prompt Tuning
  Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim [VLMVPVLM] (23 Mar 2022)

Continual Sequence Generation with Adaptive Compositional Modules
  Yanzhe Zhang, Xuezhi Wang, Diyi Yang [KELMCLL] (20 Mar 2022)

Hierarchical Inductive Transfer for Continual Dialogue Learning
  Shaoxiong Feng, Xuancheng Ren, Kan Li, Xu Sun [CLL] (20 Mar 2022)

Hyperdecoders: Instance-specific decoders for multi-task NLP
  Hamish Ivison, Matthew E. Peters [AI4CE] (15 Mar 2022)

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
  Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun (14 Mar 2022)

Towards Personalized Intelligence at Scale
  Yiping Kang, Ashish Mahendra, Christopher Clarke, Lingjia Tang, Jason Mars (13 Mar 2022)
Memory Efficient Continual Learning with Transformers
  Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cédric Archambeau [CLL] (09 Mar 2022)

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
  Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou [VLMAAML] (07 Mar 2022)

Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models
  Weiqi Sun, Haidar Khan, Nicolas Guenon des Mesnards, M. Rubino, Konstantine Arkoudas (05 Mar 2022)

Controlling the Focus of Pretrained Language Generation Models
  Jiabao Ji, Yoon Kim, James R. Glass, Tianxing He (02 Mar 2022)

Combining Modular Skills in Multitask Learning
  Edoardo Ponti, Alessandro Sordoni, Yoshua Bengio, Siva Reddy [MoE] (28 Feb 2022)

BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines
  Lisa Langnickel, Alexander Schulz, Barbara Hammer, Juliane Fluck [CLLMedIm] (21 Feb 2022)

$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning
  Yitao Liu, Chen An, Xipeng Qiu (20 Feb 2022)
Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
  Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang (16 Feb 2022)

ScaLA: Accelerating Adaptation of Pre-Trained Transformer-Based Language Models via Efficient Large-Batch Adversarial Noise
  Minjia Zhang, U. Niranjan, Yuxiong He (29 Jan 2022)

Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages
  Hariom A. Pandya, Bhavik Ardeshna, Brijesh S. Bhatt (18 Dec 2021)

Learning to Prompt for Continual Learning
  Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister [CLLVPVLMKELMVLM] (16 Dec 2021)