ResearchTrend.AI


AdapterFusion: Non-Destructive Task Composition for Transfer Learning
arXiv:2005.00247 · 1 May 2020
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
[CLL, MoMe]
Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning"

Showing 30 of 530 citing papers:
An Adapter Based Pre-Training for Efficient and Scalable Self-Supervised Speech Representation Learning
  Samuel Kessler, Bethan Thomas, S. Karout. 26 Jul 2021. [SSL]
LoRA: Low-Rank Adaptation of Large Language Models
  J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. 17 Jun 2021. [OffRL, AI4TS, AI4CE, ALM, AIMat]
Specializing Multilingual Language Models: An Empirical Study
  Ethan C. Chau, Noah A. Smith. 16 Jun 2021.
Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units
  Sara Meftah, N. Semmar, Y. Tamaazousti, H. Essafi, F. Sadat. 09 Jun 2021.
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
  Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder. 08 Jun 2021. [MoE]
On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation
  Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si. 06 Jun 2021.
Lightweight Adapter Tuning for Multilingual Speech Translation
  Hang Le, J. Pino, Changhan Wang, Jiatao Gu, D. Schwab, Laurent Besacier. 02 Jun 2021.
Exploiting Adapters for Cross-lingual Low-resource Speech Recognition
  Wenxin Hou, Hanlin Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun Xu, T. Shinozaki. 18 May 2021.
MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
  Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost van de Weijer. 28 Apr 2021.
XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond
  Francesco Barbieri, Luis Espinosa Anke, Jose Camacho-Collados. 25 Apr 2021.
What to Pre-Train on? Efficient Intermediate Task Selection
  Clifton A. Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych. 16 Apr 2021.
MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning
  Mengzhou Xia, Guoqing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, Ahmed Hassan Awadallah. 16 Apr 2021.
Continual Learning for Text Classification with Information Disentanglement Based Regularization
  Yufan Huang, Yanzhe Zhang, Jiaao Chen, Xuezhi Wang, Diyi Yang. 12 Apr 2021. [CLL]
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
  Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych. 16 Mar 2021.
Adapting MARBERT for Improved Arabic Dialect Identification: Submission to the NADI 2021 Shared Task
  Badr AlKhamissi, Mohamed Gabr, Muhammad N. ElNokrashy, Khaled Essam. 01 Mar 2021.
Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing
  Minh Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, Thien Huu Nguyen. 09 Jan 2021.
Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Xiang Lisa Li, Percy Liang. 01 Jan 2021.
WARP: Word-level Adversarial ReProgramming
  Karen Hambardzumyan, Hrant Khachatrian, Jonathan May. 01 Jan 2021. [AAML]
How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models
  Phillip Rust, Jonas Pfeiffer, Ivan Vulić, Sebastian Ruder, Iryna Gurevych. 31 Dec 2020.
UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
  Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder. 31 Dec 2020.
Verb Knowledge Injection for Multilingual Event Processing
  Olga Majewska, Ivan Vulić, Goran Glavaš, Edoardo Ponti, Anna Korhonen. 31 Dec 2020.
Parameter-Efficient Transfer Learning with Diff Pruning
  Demi Guo, Alexander M. Rush, Yoon Kim. 14 Dec 2020.
Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
  M. Vidoni, Ivan Vulić, Goran Glavaš. 11 Dec 2020.
AdapterDrop: On the Efficiency of Adapters in Transformers
  Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych. 22 Oct 2020.
MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
  Andreas Rücklé, Jonas Pfeiffer, Iryna Gurevych. 02 Oct 2020.
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
  Jonathan Pilault, Amine Elhattami, C. Pal. 19 Sep 2020. [CLL, MoE]
Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields
  Jonas Pfeiffer, Edwin Simpson, Iryna Gurevych. 01 May 2020.
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
  Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder. 30 Apr 2020.
Survey on Publicly Available Sinhala Natural Language Processing Tools and Research
  Nisansa de Silva. 05 Jun 2019.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018. [ELM]