AdapterFusion: Non-Destructive Task Composition for Transfer Learning (arXiv:2005.00247)
1 May 2020
Jonas Pfeiffer
Aishwarya Kamath
Andreas Rücklé
Kyunghyun Cho
Iryna Gurevych
CLL
MoMe
Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning" (50 of 530 papers shown)
Multi-Head Adapter Routing for Cross-Task Generalization
Lucas Caccia
Edoardo Ponti
Zhan Su
Matheus Pereira
Nicolas Le Roux
Alessandro Sordoni
21
20
0
07 Nov 2022
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
Yaqing Wang
Sahaj Agarwal
Subhabrata Mukherjee
Xiaodong Liu
Jing Gao
Ahmed Hassan Awadallah
Jianfeng Gao
MoE
22
118
0
31 Oct 2022
Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5
Irina Bigoulaeva
Rachneet Sachdeva
Harish Tayyar Madabushi
Aline Villavicencio
Iryna Gurevych
LRM
53
5
0
31 Oct 2022
Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning
Yifan Chen
Devamanyu Hazarika
Mahdi Namazifar
Yang Liu
Di Jin
Dilek Z. Hakkani-Tür
29
3
0
26 Oct 2022
Evaluating Parameter Efficient Learning for Generation
Peng Xu
M. Patwary
Shrimai Prabhumoye
Virginia Adams
R. Prenger
Ming-Yu Liu
Nayeon Lee
M. Shoeybi
Bryan Catanzaro
MoE
35
3
0
25 Oct 2022
Adapters for Enhanced Modeling of Multilingual Knowledge and Text
Yifan Hou
Wenxiang Jiao
Mei-Jun Liu
Carl Allen
Zhaopeng Tu
Mrinmaya Sachan
34
11
0
24 Oct 2022
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning
Jing Yi
Weize Chen
Yujia Qin
Yankai Lin
Ning Ding
Xu Han
Zhiyuan Liu
Maosong Sun
Jie Zhou
20
2
0
24 Oct 2022
Exploring The Landscape of Distributional Robustness for Question Answering Models
Anas Awadalla
Mitchell Wortsman
Gabriel Ilharco
Sewon Min
Ian H. Magnusson
Hannaneh Hajishirzi
Ludwig Schmidt
ELM
OOD
KELM
72
19
0
22 Oct 2022
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters
Hongyu Zhao
Hao Tan
Hongyuan Mei
MoE
39
16
0
18 Oct 2022
Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints
Omid Rohanian
Hannah Jauncey
Mohammadmahdi Nouriborji
Vinod Kumar Chauhan
Bronner P. Gonçalves
Christiana Kartsonaki
ISARIC Clinical Characterisation Group
L. Merson
David A. Clifton
19
7
0
17 Oct 2022
BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Tianxiang Sun
Junliang He
Xipeng Qiu
Xuanjing Huang
24
44
0
14 Oct 2022
MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers
Mohammadmahdi Nouriborji
Omid Rohanian
Samaneh Kouchaki
David A. Clifton
32
8
0
12 Oct 2022
Back to the Future: On Potential Histories in NLP
Zeerak Talat
Anne Lauscher
AI4TS
30
4
0
12 Oct 2022
Revisiting adapters with adversarial training
Sylvestre-Alvise Rebuffi
Francesco Croce
Sven Gowal
AAML
36
16
0
10 Oct 2022
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
Shwai He
Liang Ding
Daize Dong
Miao Zhang
Dacheng Tao
MoE
32
87
0
09 Oct 2022
Granularity-aware Adaptation for Image Retrieval over Multiple Tasks
Jon Almazán
ByungSoo Ko
Geonmo Gu
Diane Larlus
Yannis Kalantidis
ObjD
VLM
41
7
0
05 Oct 2022
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu
Yu-Xiang Wang
Sheng Zha
George Karypis
28
46
0
30 Sep 2022
ConceptNet infused DialoGPT for Underlying Commonsense Understanding and Reasoning in Dialogue Response Generation
Ye Liu
Wolfgang Maier
Wolfgang Minker
Stefan Ultes
49
2
0
29 Sep 2022
Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding
Erica K. Shimomoto
Edison Marrese-Taylor
Hiroya Takamura
Ichiro Kobayashi
Hideki Nakayama
Yusuke Miyao
27
7
0
26 Sep 2022
An Empirical Study on Cross-X Transfer for Legal Judgment Prediction
Joel Niklaus
Matthias Stürmer
Ilias Chalkidis
ELM
AILaw
55
19
0
25 Sep 2022
WinoDict: Probing language models for in-context word acquisition
Julian Martin Eisenschlos
Jeremy R. Cole
Fangyu Liu
William W. Cohen
KELM
27
12
0
25 Sep 2022
Efficient Few-Shot Learning Without Prompts
Lewis Tunstall
Nils Reimers
Unso Eun Seo Jo
Luke Bates
Daniel Korat
Moshe Wasserblat
Oren Pereg
VLM
36
182
0
22 Sep 2022
Parameter-Efficient Finetuning for Robust Continual Multilingual Learning
Kartikeya Badola
Shachi Dave
Partha P. Talukdar
CLL
KELM
39
7
0
14 Sep 2022
Review of Natural Language Processing in Pharmacology
D. Trajanov
Vangel Trajkovski
Makedonka Dimitrieva
Jovana Dobreva
Milos Jovanovik
Matej Klemen
Aleš Žagar
Marko Robnik-Šikonja
LM&MA
36
7
0
22 Aug 2022
Visual Comparison of Language Model Adaptation
Rita Sevastjanova
E. Cakmak
Shauli Ravfogel
Ryan Cotterell
Mennatallah El-Assady
VLM
44
16
0
17 Aug 2022
Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets
Hao Chen
R. Tao
Han Zhang
Yidong Wang
Xiang Li
Weirong Ye
Jindong Wang
Guosheng Hu
Marios Savvides
VPVLM
32
53
0
15 Aug 2022
Frozen CLIP Models are Efficient Video Learners
Ziyi Lin
Shijie Geng
Renrui Zhang
Peng Gao
Gerard de Melo
Xiaogang Wang
Jifeng Dai
Yu Qiao
Hongsheng Li
CLIP
VLM
16
200
0
06 Aug 2022
Pro-tuning: Unified Prompt Tuning for Vision Tasks
Xing Nie
Bolin Ni
Jianlong Chang
Gaomeng Meng
Chunlei Huo
Zhaoxiang Zhang
Shiming Xiang
Qi Tian
Chunhong Pan
AAML
VPVLM
VLM
34
70
0
28 Jul 2022
Contrastive Adapters for Foundation Model Group Robustness
Michael Zhang
Christopher Ré
VLM
18
62
0
14 Jul 2022
Convolutional Bypasses Are Better Vision Transformer Adapters
Shibo Jie
Zhi-Hong Deng
VPVLM
21
132
0
14 Jul 2022
FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle
Bettina Berendt
26
8
0
10 Jul 2022
Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation
Zejiang Hou
Julian Salazar
George Polovets
30
14
0
07 Jul 2022
Continual Learning with Transformers for Image Classification
Beyza Ermis
Giovanni Zappella
Martin Wistuba
Aditya Rawal
Cédric Archambeau
CLL
43
21
0
28 Jun 2022
ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning
Junting Pan
Ziyi Lin
Xiatian Zhu
Jing Shao
Hongsheng Li
27
191
0
27 Jun 2022
GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions
Scott McCarley
Mihaela A. Bornea
Sara Rosenthal
Anthony Ferritto
Md Arafat Sultan
Avirup Sil
Radu Florian
14
1
0
16 Jun 2022
Sparse Structure Search for Parameter-Efficient Tuning
Shengding Hu
Zhen Zhang
Ning Ding
Yadao Wang
Yasheng Wang
Zhiyuan Liu
Maosong Sun
37
16
0
15 Jun 2022
TransVG++: End-to-End Visual Grounding with Language Conditioned Vision Transformer
Jiajun Deng
Zhengyuan Yang
Daqing Liu
Tianlang Chen
Wen-gang Zhou
Yanyong Zhang
Houqiang Li
Wanli Ouyang
ViT
30
50
0
14 Jun 2022
Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning
Yu Jin Kim
Beong-woo Kwak
Youngwook Kim
Reinald Kim Amplayo
Seung-won Hwang
Jinyoung Yeo
LRM
27
12
0
08 Jun 2022
Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
Lukas Hauzenberger
Shahed Masoudian
Deepak Kumar
Markus Schedl
Navid Rekabsaz
30
17
0
30 May 2022
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Shoufa Chen
Chongjian Ge
Zhan Tong
Jiangliu Wang
Yibing Song
Jue Wang
Ping Luo
152
639
0
26 May 2022
Enhancing Continual Learning with Global Prototypes: Counteracting Negative Representation Drift
Xueying Bai
Jinghuan Shang
Yifan Sun
Niranjan Balasubramanian
CLL
35
1
0
24 May 2022
Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer
Ahmet Üstün
Arianna Bisazza
G. Bouma
Gertjan van Noord
Sebastian Ruder
54
32
0
24 May 2022
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai
Mohammadreza Salehi
Matthew E. Peters
Hannaneh Hajishirzi
130
100
0
24 May 2022
When does Parameter-Efficient Transfer Learning Work for Machine Translation?
Ahmet Üstün
Asa Cooper Stickland
37
7
0
23 May 2022
Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning
Yiwei Li
Bin Sun
Shaoxiong Feng
Kan Li
29
3
0
23 May 2022
FedAdapter: Efficient Federated Learning for Modern NLP
Dongqi Cai
Yaozong Wu
Shangguang Wang
F. Lin
Mengwei Xu
FedML
AI4CE
36
21
0
20 May 2022
Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Jonas Pfeiffer
Naman Goyal
Xi Lin
Xian Li
James Cross
Sebastian Riedel
Mikel Artetxe
LRM
40
139
0
12 May 2022
AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling
Haoqin Tu
Zhongliang Yang
Jinshuai Yang
Yong Huang
23
12
0
12 May 2022
AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
Chin-Lun Fu
Zih-Ching Chen
Yun-Ru Lee
Hung-yi Lee
33
44
0
30 Apr 2022
On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules
Divyam Goel
Raman Grover
Fatemeh H. Fard
28
18
0
19 Apr 2022