ResearchTrend.AI
arXiv:2110.04366
Towards a Unified View of Parameter-Efficient Transfer Learning

8 October 2021
Junxian He
Chunting Zhou
Xuezhe Ma
Taylor Berg-Kirkpatrick
Graham Neubig
    AAML

Papers citing "Towards a Unified View of Parameter-Efficient Transfer Learning"

50 / 633 papers shown
On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning
Chengzu Li
Han Zhou
Goran Glavaš
Anna Korhonen
Ivan Vulić
21
11
0
21 Dec 2023
LlaMaVAE: Guiding Large Language Model Generation via Continuous Latent Sentence Spaces
Yingji Zhang
Danilo S. Carvalho
Ian Pratt-Hartmann
André Freitas
VLM
27
2
0
20 Dec 2023
PPEA-Depth: Progressive Parameter-Efficient Adaptation for Self-Supervised Monocular Depth Estimation
Yue-Jiang Dong
Yuanchen Guo
Ying-Tian Liu
Fang-Lue Zhang
Song-Hai Zhang
MDE
43
4
0
20 Dec 2023
Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
Lingling Xu
Haoran Xie
S. J. Qin
Xiaohui Tao
F. Wang
52
135
0
19 Dec 2023
Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning
Bingchen Zhao
Haoqin Tu
Chen Wei
Jieru Mei
Cihang Xie
20
32
0
18 Dec 2023
SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing
Zeyinzi Jiang
Chaojie Mao
Yulin Pan
Zhen Han
Jingfeng Zhang
32
28
0
18 Dec 2023
kNN-ICL: Compositional Task-Oriented Parsing Generalization with Nearest Neighbor In-Context Learning
Wenting Zhao
Ye Liu
Yao Wan
Yibo Wang
Qingyang Wu
Zhongfen Deng
Jiangshu Du
Shuaiqi Liu
Yunlong Xu
Philip S. Yu
49
7
0
17 Dec 2023
Gradient-based Parameter Selection for Efficient Fine-Tuning
Zhi Zhang
Qizhe Zhang
Zijun Gao
Renrui Zhang
Ekaterina Shutova
Shiji Zhou
Shanghang Zhang
33
15
0
15 Dec 2023
LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin
Shihan Dou
Enyu Zhou
Yan Liu
Songyang Gao
Jun Zhao
...
Jiang Zhu
Rui Zheng
Tao Gui
Qi Zhang
Xuanjing Huang
CLL
MoE
KELM
22
29
0
15 Dec 2023
VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding
Yi Xin
Junlong Du
Qiang Wang
Zhiwen Lin
Ke Yan
VPVLM
89
49
0
14 Dec 2023
Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach
Xingshuai Huang
Di Wu
Benoit Boulet
OffRL
24
2
0
12 Dec 2023
Batched Low-Rank Adaptation of Foundation Models
Yeming Wen
Swarat Chaudhuri
OffRL
23
19
0
09 Dec 2023
From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos
Yin Chen
Jia Li
Shiguang Shan
Meng Wang
Richang Hong
46
32
0
09 Dec 2023
Adapting Vision Transformer for Efficient Change Detection
Yang Zhao
Yuxiang Zhang
Yanni Dong
Bo Du
VLM
48
2
0
08 Dec 2023
Creative Agents: Empowering Agents with Imagination for Creative Tasks
Chi Zhang
Penglin Cai
Yuhui Fu
Haoqi Yuan
Zongqing Lu
LM&Ro
LLMAG
51
20
0
05 Dec 2023
Regressor-Segmenter Mutual Prompt Learning for Crowd Counting
Mingyue Guo
Li Yuan
Zhaoyi Yan
Binghui Chen
Yaowei Wang
QiXiang Ye
38
4
0
04 Dec 2023
D$^2$ST-Adapter: Disentangled-and-Deformable Spatio-Temporal Adapter for Few-shot Action Recognition
Wenjie Pei
Qizhong Tan
Guangming Lu
Jiandong Tian
41
3
0
03 Dec 2023
Hyperparameter Optimization for Large Language Model Instruction-Tuning
C. Tribes
Sacha Benarroch-Lelong
Peng Lu
I. Kobyzev
29
12
0
01 Dec 2023
A Bayesian approach for prompt optimization in pre-trained language models
Antonio Sabbatella
Andrea Ponti
Antonio Candelieri
I. Giordani
Francesco Archetti
34
1
0
01 Dec 2023
Grounding Foundation Models through Federated Transfer Learning: A General Framework
Yan Kang
Tao Fan
Hanlin Gu
Xiaojin Zhang
Lixin Fan
Qiang Yang
AI4CE
68
19
0
29 Nov 2023
SPIN: Sparsifying and Integrating Internal Neurons in Large Language Models for Text Classification
Difan Jiao
Yilun Liu
Zhenwei Tang
Daniel Matter
Jürgen Pfeffer
Ashton Anderson
17
1
0
27 Nov 2023
One Fits All: Universal Time Series Analysis by Pretrained LM and Specially Designed Adaptors
Tian Zhou
Peisong Niu
Xue Wang
Liang Sun
Rong Jin
AI4TS
68
2
0
24 Nov 2023
Sparse Low-rank Adaptation of Pre-trained Language Models
Ning Ding
Xingtai Lv
Qiaosen Wang
Yulin Chen
Bowen Zhou
Zhiyuan Liu
Maosong Sun
27
55
0
20 Nov 2023
Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Clifton A. Poth
Hannah Sterz
Indraneil Paul
Sukannya Purkayastha
Leon Arne Engländer
Timo Imhof
Ivan Vulić
Sebastian Ruder
Iryna Gurevych
Jonas Pfeiffer
32
45
0
18 Nov 2023
When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks
Hao Peng
Xiaozhi Wang
Jianhui Chen
Weikai Li
Y. Qi
...
Zhili Wu
Kaisheng Zeng
Bin Xu
Lei Hou
Juanzi Li
34
28
0
15 Nov 2023
PEMA: An Offsite-Tunable Plug-in External Memory Adaptation for Language Models
HyunJin Kim
Young Jin Kim
Jinyeong Bak
KELM
14
1
0
14 Nov 2023
Low-Rank Adaptation for Multilingual Summarization: An Empirical Study
Chenxi Whitehouse
Fantine Huot
Jasmijn Bastings
Mostafa Dehghani
Chu-Cheng Lin
Mirella Lapata
19
6
0
14 Nov 2023
Tunable Soft Prompts are Messengers in Federated Learning
Chenhe Dong
Yuexiang Xie
Bolin Ding
Ying Shen
Yaliang Li
FedML
46
7
0
12 Nov 2023
PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment
Amirhossein Dadashzadeh
Shuchao Duan
Alan Whone
Majid Mirmehdi
24
7
0
11 Nov 2023
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu
Zeju Qiu
Yao Feng
Yuliang Xiu
Yuxuan Xue
...
Songyou Peng
Yandong Wen
Michael J. Black
Adrian Weller
Bernhard Schölkopf
50
57
0
10 Nov 2023
Mini but Mighty: Finetuning ViTs with Mini Adapters
Imad Eddine Marouf
Enzo Tartaglione
Stéphane Lathuilière
36
5
0
07 Nov 2023
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
Le Yu
Yu Bowen
Haiyang Yu
Fei Huang
Yongbin Li
MoMe
33
275
0
06 Nov 2023
Making Harmful Behaviors Unlearnable for Large Language Models
Xin Zhou
Yi Lu
Ruotian Ma
Tao Gui
Qi Zhang
Xuanjing Huang
MU
41
9
0
02 Nov 2023
AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification
Yongxin Huang
Kexin Wang
Sourav Dutta
Raj Nath Patel
Goran Glavaš
Iryna Gurevych
VLM
20
4
0
01 Nov 2023
Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone
Zeyinzi Jiang
Chaojie Mao
Ziyuan Huang
Ao Ma
Yiliang Lv
Yujun Shen
Deli Zhao
Jingren Zhou
32
15
0
30 Oct 2023
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov
Philip Torr
Adel Bibi
VPVLM
32
21
0
30 Oct 2023
FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing
Terence Jie Chua
Wen-li Yu
Junfeng Zhao
Kwok-Yan Lam
FedML
29
5
0
26 Oct 2023
Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models
Raymond Li
Gabriel Murray
Giuseppe Carenini
MoE
41
2
0
24 Oct 2023
Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
Chaojun Xiao
Yuqi Luo
Wenbin Zhang
Pengle Zhang
Xu Han
...
Zhengyan Zhang
Ruobing Xie
Zhiyuan Liu
Maosong Sun
Jie Zhou
30
0
0
24 Oct 2023
Confounder Balancing in Adversarial Domain Adaptation for Pre-Trained Large Models Fine-Tuning
Shuoran Jiang
Qingcai Chen
Yang Xiang
Youcheng Pan
Xiangping Wu
AI4CE
21
0
0
24 Oct 2023
SteloCoder: a Decoder-Only LLM for Multi-Language to Python Code Translation
Jialing Pan
Adrien Sadé
Jin Kim
Eric Soriano
Guillem Sole
Sylvain Flamant
SyDa
13
16
0
24 Oct 2023
Orthogonal Subspace Learning for Language Model Continual Learning
Xiao Wang
Tianze Chen
Qiming Ge
Han Xia
Rong Bao
Rui Zheng
Qi Zhang
Tao Gui
Xuanjing Huang
CLL
122
90
0
22 Oct 2023
PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain
Wei-wei Zhu
Xiaoling Wang
Huanran Zheng
Mosha Chen
Buzhou Tang
ELM
LM&MA
21
33
0
22 Oct 2023
Scalable Neural Network Kernels
Arijit Sehanobish
Krzysztof Choromanski
Yunfan Zhao
Kumar Avinava Dubey
Valerii Likhosherstov
41
5
0
20 Oct 2023
RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering
Yuduo Wang
Pedram Ghamisi
30
4
0
19 Oct 2023
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning
Han Zhou
Xingchen Wan
Ivan Vulić
Anna Korhonen
LLMAG
28
15
0
19 Oct 2023
Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding
Jianing Wang
Qiushi Sun
Nuo Chen
Chengyu Wang
Jun Huang
Ming Gao
Xiang Li
UQLM
26
3
0
19 Oct 2023
Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
Hao Zhao
Jie Fu
Zhaofeng He
105
6
0
18 Oct 2023
Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters
Gyuseong Lee
Wooseok Jang
Jin Hyeon Kim
Jaewoo Jung
Seungryong Kim
MoE
OOD
30
2
0
17 Oct 2023
PELA: Learning Parameter-Efficient Models with Low-Rank Approximation
Yangyang Guo
Guangzhi Wang
Mohan S. Kankanhalli
21
3
0
16 Oct 2023