AdapterDrop: On the Efficiency of Adapters in Transformers

22 October 2020
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

Papers citing "AdapterDrop: On the Efficiency of Adapters in Transformers"

Showing 23 of 73 citing papers.

MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers
Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton
12 Oct 2022

Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo, Boyang Albert Li, Han Yu
06 Oct 2022

Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding
Erica K. Shimomoto, Edison Marrese-Taylor, Hiroya Takamura, Ichiro Kobayashi, Hideki Nakayama, Yusuke Miyao
26 Sep 2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

Sparse Structure Search for Parameter-Efficient Tuning
Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun
15 Jun 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022

Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe
12 May 2022

Training Mixed-Domain Translation Models via Federated Learning
Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit R. Chadha, Clement Chung
03 May 2022

Adaptable Adapters
N. Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych
03 May 2022

AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee
30 Apr 2022

Adapting BigScience Multilingual Model to Unseen Languages
Zheng-Xin Yong, Vassilina Nikoulina
11 Apr 2022

Parameter-Efficient Abstractive Question Answering over Tables or Text
Vaishali Pal, Evangelos Kanoulas, Maarten de Rijke
07 Apr 2022

Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
Robert Litschko, Ivan Vulić, Goran Glavaš
05 Apr 2022

Parameter-efficient Model Adaptation for Vision Transformers
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Qing Guo
29 Mar 2022

Continual Sequence Generation with Adaptive Compositional Modules
Yanzhe Zhang, Xuezhi Wang, Diyi Yang
20 Mar 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun
14 Mar 2022

Communication-Efficient Federated Learning for Neural Machine Translation
Tanya Roosta, Peyman Passban, Ankit R. Chadha
12 Dec 2021

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

xGQA: Cross-Lingual Visual Question Answering
Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, Iryna Gurevych
13 Sep 2021

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder
08 Jun 2021

BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
31 Dec 2020

Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
M. Vidoni, Ivan Vulić, Goran Glavaš
11 Dec 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018