AdapterDrop: On the Efficiency of Adapters in Transformers

22 October 2020
Andreas Rücklé
Gregor Geigle
Max Glockner
Tilman Beck
Jonas Pfeiffer
Nils Reimers
Iryna Gurevych

Papers citing "AdapterDrop: On the Efficiency of Adapters in Transformers"

Showing 50 of 73 citing papers.
RepCali: High Efficient Fine-tuning Via Representation Calibration in Latent Space for Pre-trained Language Models
Fujun Zhang
Xiangdong Su
13 May 2025
Saliency-Motion Guided Trunk-Collateral Network for Unsupervised Video Object Segmentation
Xiangyu Zheng
Wanyun Li
Songcheng He
Jianping Fan
Xiaoqiang Li
Wei Zhang
VOS
08 Apr 2025
SSH: Sparse Spectrum Adaptation via Discrete Hartley Transformation
Yixian Shen
Qi Bi
Jia-Hong Huang
Hongyi Zhu
Andy D. Pimentel
Anuj Pathania
08 Feb 2025
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning
Zhekai Du
Yinjie Min
Jingjing Li
Ke Lu
Changliang Zou
Liuhua Peng
Tingjin Chu
Mingming Gong
05 Feb 2025
PARA: Parameter-Efficient Fine-tuning with Prompt Aware Representation Adjustment
Zequan Liu
Yi Zhao
Ming Tan
Wei Zhu
Aaron Xuxiang Tian
03 Feb 2025
BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
Yibin Wang
Haizhou Shi
Ligong Han
Dimitris N. Metaxas
Hao Wang
BDL
UQLM
28 Jan 2025
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models
Fan Wang
Juyong Jiang
Chansung Park
Sunghun Kim
Jing Tang
08 Dec 2024
Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
Liwen Wang
Sheng Chen
Linnan Jiang
Shu Pan
Runze Cai
Sen Yang
Fei Yang
24 Oct 2024
LoRTA: Low Rank Tensor Adaptation of Large Language Models
Ignacio Hounie
Charilaos I. Kanatsoulis
Arnuv Tandon
Alejandro Ribeiro
05 Oct 2024
Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models
Philipp Mondorf
Sondre Wold
Barbara Plank
02 Oct 2024
Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners
Yifei Gao
Jie Ou
Lei Wang
Fanhua Shang
Jaji Wu
MQ
22 Jul 2024
Low-Rank Interconnected Adaptation Across Layers
Yibo Zhong
Yao Zhou
OffRL
MoE
13 Jul 2024
ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation
Yurun Song
Junchen Zhao
Ian G. Harris
Sangeetha Abdu Jyothi
16 Jun 2024
Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting
Divij Gupta
Anubhav Bhatti
Surajsinh Parmar
Chen Dan
Yuwei Liu
Bingjie Shen
San Lee
AI4TS
16 May 2024
The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification
Minh Duc Bui
K. Wense
03 May 2024
AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees
William Fleshman
Aleem Khan
Marc Marone
Benjamin Van Durme
CLL
KELM
12 Apr 2024
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Fanxu Meng
Zhaohui Wang
Muhan Zhang
VLM
03 Apr 2024
Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning
Huiyi Wang
Haodong Lu
Lina Yao
Dong Gong
KELM
CLL
27 Mar 2024
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks
Tingyu Qu
Tinne Tuytelaars
Marie-Francine Moens
MoE
14 Mar 2024
Capturing Pertinent Symbolic Features for Enhanced Content-Based Misinformation Detection
Flavio Merenda
José Manuel Gómez-Pérez
29 Jan 2024
TransMed: Large Language Models Enhance Vision Transformer for Biomedical Image Classification
Kaipeng Zheng
Weiran Huang
Lichao Sun
LM&MA
MedIm
VLM
12 Dec 2023
Tied-Lora: Enhancing parameter efficiency of LoRA with weight tying
Adithya Renduchintala
Tugrul Konuk
Oleksii Kuchaiev
MoMe
16 Nov 2023
Scalable Neural Network Kernels
Arijit Sehanobish
Krzysztof Choromanski
Yunfan Zhao
Kumar Avinava Dubey
Valerii Likhosherstov
20 Oct 2023
Decomposed Prompt Tuning via Low-Rank Reparameterization
Yao Xiao
Lu Xu
Jiaxi Li
Wei Lu
Xiaoli Li
VLM
16 Oct 2023
NOLA: Compressing LoRA using Linear Combination of Random Basis
Soroush Abbasi Koohpayegani
K. Navaneet
Parsa Nooralinejad
Soheil Kolouri
Hamed Pirsiavash
04 Oct 2023
Measuring Catastrophic Forgetting in Cross-Lingual Transfer Paradigms: Exploring Tuning Strategies
Boshko Koloski
Blaž Škrlj
Marko Robnik-Šikonja
Senja Pollak
CLL
12 Sep 2023
How to Protect Copyright Data in Optimization of Large Language Models?
T. Chu
Zhao Song
Chiwun Yang
23 Aug 2023
Low-Parameter Federated Learning with Large Language Models
Jing Jiang
Xiangyang Liu
Chenyou Fan
26 Jul 2023
FedYolo: Augmenting Federated Learning with Pretrained Transformers
Xuechen Zhang
Mingchen Li
Xiangyu Chang
Jiasi Chen
A. Roy-Chowdhury
A. Suresh
Samet Oymak
FedML
10 Jul 2023
OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models
Shengding Hu
Ning Ding
Weilin Zhao
Xingtai Lv
Zhen Zhang
Zhiyuan Liu
Maosong Sun
05 Jul 2023
Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions
Dongshuo Yin
Xueting Han
Bin Li
Hao Feng
Jinghua Bai
VPVLM
16 Jun 2023
Adaptive Fake Audio Detection with Low-Rank Model Squeezing
Xiaohui Zhang
Jiangyan Yi
J. Tao
Chenlong Wang
Le Xu
Ruibo Fu
08 Jun 2023
Jointly Reparametrized Multi-Layer Adaptation for Efficient and Private Tuning
Umang Gupta
Aram Galstyan
Greg Ver Steeg
30 May 2023
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning
Zhen-Ru Zhang
Chuanqi Tan
Haiyang Xu
Chengyu Wang
Jun Huang
Songfang Huang
24 May 2023
A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra
Sumanth Doddapaneni
Raj Dabre
Anoop Kunchukuttan
Ratish Puduppully
Mitesh M. Khapra
12 May 2023
Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
Anastasia Razdaibiedina
Yuning Mao
Rui Hou
Madian Khabsa
M. Lewis
Jimmy Ba
Amjad Almahairi
VLM
06 May 2023
PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
Mohammed Sabry
Anya Belz
24 Apr 2023
Parameter-Efficient Sparse Retrievers and Rerankers using Adapters
Vaishali Pal
Carlos Lassance
Hervé Déjean
S. Clinchant
23 Mar 2023
Modular Deep Learning
Jonas Pfeiffer
Sebastian Ruder
Ivan Vulić
E. Ponti
MoMe
OOD
22 Feb 2023
An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
Mohammad AkbarTajari
S. Rajaee
Mohammad Taher Pilehvar
01 Feb 2023
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning
Han Zhou
Xingchen Wan
Ivan Vulić
Anna Korhonen
28 Jan 2023
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
Zheng-Xin Yong
Hailey Schoelkopf
Niklas Muennighoff
Alham Fikri Aji
David Ifeoluwa Adelani
...
Genta Indra Winata
Stella Biderman
Edward Raff
Dragomir R. Radev
Vassilina Nikoulina
CLL
VLM
AI4CE
LRM
19 Dec 2022
SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers
Ameet Deshpande
Md Arafat Sultan
Anthony Ferritto
Ashwin Kalyan
Karthik Narasimhan
Avirup Sil
MoE
29 Nov 2022
Scaling Native Language Identification with Transformer Adapters
Ahmet Uluslu
Gerold Schneider
18 Nov 2022
Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5
Irina Bigoulaeva
Rachneet Sachdeva
Harish Tayyar Madabushi
Aline Villavicencio
Iryna Gurevych
LRM
31 Oct 2022
$m^4$Adapter: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter
Wen Lai
Alexandra Chronopoulou
Alexander Fraser
21 Oct 2022
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
Xiangyang Liu
Tianxiang Sun
Xuanjing Huang
Xipeng Qiu
VLM
20 Oct 2022
Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking
Tim Baumgärtner
Leonardo F. R. Ribeiro
Nils Reimers
Iryna Gurevych
19 Oct 2022
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters
Hongyu Zhao
Hao Tan
Hongyuan Mei
MoE
18 Oct 2022
Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints
Omid Rohanian
Hannah Jauncey
Mohammadmahdi Nouriborji
Vinod Kumar Chauhan
Bronner P. Gonçalves
Christiana Kartsonaki
ISARIC Clinical Characterisation Group
L. Merson
David A. Clifton
17 Oct 2022