ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
8 October 2021 · arXiv:2110.04366 · AAML

Papers citing "Towards a Unified View of Parameter-Efficient Transfer Learning"

50 / 633 papers shown
RanPAC: Random Projections and Pre-trained Models for Continual Learning
Mark D Mcdonnell, Dong Gong, Amin Parvaneh, Ehsan Abbasnejad, Anton Van Den Hengel
VLM, CLL · 05 Jul 2023

Visual Instruction Tuning with Polite Flamingo
Delong Chen, Jianfeng Liu, Wenliang Dai, Baoyuan Wang
MLLM · 03 Jul 2023

Approximated Prompt Tuning for Vision-Language Pre-trained Models
Qiong Wu, Shubin Huang, Yiyi Zhou, Pingyang Dai, Annan Shu, Guannan Jiang, Rongrong Ji
VLM, VPVLM · 27 Jun 2023

Composing Parameter-Efficient Modules with Arithmetic Operations
Jinghan Zhang, Shiqi Chen, Junteng Liu, Junxian He
KELM, MoMe · 26 Jun 2023

Listener Model for the PhotoBook Referential Game with CLIPScores as Implicit Reference Chain
Shih-Lun Wu, Yi-Hui Chou, Liang Li
16 Jun 2023

Efficient Adapters for Giant Speech Models
Nanxin Chen, Izhak Shafran, Yu Zhang, Chung-Cheng Chiu, H. Soltau, James Qin, Yonghui Wu
13 Jun 2023

One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning
Arnav Chavan, Zhuang Liu, D. K. Gupta, Eric P. Xing, Zhiqiang Shen
13 Jun 2023

Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models Memories
Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang
MoE, AI4CE · 08 Jun 2023

PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models
Tiantian Feng, Shrikanth Narayanan
08 Jun 2023

Prompter: Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain Adaptation
Taha İbrahim Aksu, MingSung Kan, Nancy F. Chen
VLM · 07 Jun 2023

Git-Theta: A Git Extension for Collaborative Development of Machine Learning Models
Nikhil Kandpal, Brian Lester, Mohammed Muqeeth, Anisha Mascarenhas, Monty Evans, Vishal Baskaran, Tenghao Huang, Haokun Liu, Colin Raffel
VLM · 07 Jun 2023

An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models
Zhongbin Xie, Thomas Lukasiewicz
06 Jun 2023

Cross-Lingual Transfer with Target Language-Ready Task Adapters
Marinela Parović, Alan Ansell, Ivan Vulić, Anna Korhonen
05 Jun 2023

Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models
Fengzhu Zeng, Wei Gao
05 Jun 2023
Exploring the Impact of Model Scaling on Parameter-Efficient Tuning
Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, ..., Ning Ding, Xingzhi Sun, Guotong Xie, Zhiyuan Liu, Maosong Sun
04 Jun 2023

Speech Translation with Foundation Models and Optimal Transport: UPC at IWSLT23
Ioannis Tsiamas, Gerard I. Gállego, José A. R. Fonollosa, Marta R. Costa-jussà
OT · 02 Jun 2023

DeepFake-Adapter: Dual-Level Adapter for DeepFake Detection
Rui Shao, Tianxing Wu, Liqiang Nie, Ziwei Liu
01 Jun 2023

Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
Baohao Liao, Shaomu Tan, Christof Monz
KELM · 01 Jun 2023

Jointly Reparametrized Multi-Layer Adaptation for Efficient and Private Tuning
Umang Gupta, Aram Galstyan, Greg Ver Steeg
30 May 2023

Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, ..., Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
ALM · 30 May 2023

One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu
28 May 2023

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang
VLM · 27 May 2023

Parameter-Efficient Fine-Tuning without Introducing New Latency
Baohao Liao, Yan Meng, Christof Monz
26 May 2023

Code-Switched Text Synthesis in Unseen Language Pairs
I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang
26 May 2023

Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models
Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg
26 May 2023
READ: Recurrent Adaptation of Large Transformers
Sida I. Wang, John Nguyen, Ke Li, Carole-Jean Wu
24 May 2023

Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning
Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang
24 May 2023

Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models
Gen Luo, Yiyi Zhou, Tianhe Ren, Shen Chen, Xiaoshuai Sun, Rongrong Ji
VLM, MLLM · 24 May 2023

Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings
Josip Jukić, Jan Šnajder
23 May 2023

Few-shot Unified Question Answering: Tuning Models or Prompts?
Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Moorthy Bhat, Yingbo Zhou
23 May 2023

VIP5: Towards Multimodal Foundation Models for Recommendation
Shijie Geng, Juntao Tan, Shuchang Liu, Zuohui Fu, Yongfeng Zhang
23 May 2023

mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations
Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder
VLM, LRM · 23 May 2023

In-Context Probing: Toward Building Robust Classifiers via Probing Large Language Models
Afra Amini, Massimiliano Ciaramita
ReLM · 23 May 2023

Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
Jeonghoon Kim, J. H. Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, S. Kwon, Dongsoo Lee
MQ · 23 May 2023

Global Structure Knowledge-Guided Relation Extraction Method for Visually-Rich Document
Xiangnan Chen, Qianwen Xiao, Juncheng Li, Duo Dong, Jun Lin, Xiaozhong Liu, Siliang Tang
23 May 2023

ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models
Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, Dongkuan Xu
KELM, LLMAG, LRM · 23 May 2023
Small Language Models Improve Giants by Rewriting Their Outputs
Giorgos Vernikos, Arthur Bražinskas, Jakub Adamek, Jonathan Mallinson, Aliaksei Severyn, Eric Malmi
BDL, LRM · 22 May 2023

Modular Domain Adaptation for Conformer-Based Streaming ASR
Qiujia Li, Bo-wen Li, DongSeon Hwang, Tara N. Sainath, P. M. Mengibar
22 May 2023

DADA: Dialect Adaptation via Dynamic Aggregation of Linguistic Rules
Yanchen Liu, William B. Held, Diyi Yang
22 May 2023

VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending
Xingjian He, Sihan Chen, Fan Ma, Zhicheng Huang, Xiaojie Jin, Zikang Liu, Dongmei Fu, Yi Yang, Jiaheng Liu, Jiashi Feng
VLM, CLIP · 22 May 2023

Nearest Neighbor Machine Translation is Meta-Optimizer on Output Projection Layer
R. Gao, Zhirui Zhang, Yichao Du, Lemao Liu, Rui Wang
22 May 2023

Prefix Propagation: Parameter-Efficient Tuning for Long Sequences
Jonathan Li, Will Aitken, R. Bhambhoria, Xiao-Dan Zhu
20 May 2023

Parameter-Efficient Learning for Text-to-Speech Accent Adaptation
Lijie Yang, Chao-Han Huck Yang, Jen-Tzung Chien
18 May 2023

A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model
S. Radhakrishnan, Chao-Han Huck Yang, S. Khan, N. Kiani, D. Gómez-Cabrero, Jesper N. Tegnér
18 May 2023

G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
Anchun Gui, Jinqiang Ye, Han Xiao
17 May 2023

Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity
Raman Dutt, Linus Ericsson, Pedro Sanchez, Sotirios A. Tsaftaris, Timothy M. Hospedales
MedIm · 14 May 2023

A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra
12 May 2023

ArtGPT-4: Towards Artistic-understanding Large Vision-Language Models with Enhanced Adapter
Zheng Yuan, HU Xue, Kun Wang, Yongming Liu, Kun Wang
VLM, MLLM · 12 May 2023

Towards Versatile and Efficient Visual Knowledge Integration into Pre-trained Language Models with Cross-Modal Adapters
Xinyun Zhang, Haochen Tan, Han Wu, Bei Yu
KELM · 12 May 2023

LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM
Wenhui Hua, Brian Williams, Davood Shamsi
10 May 2023