arXiv: 2211.06840
FPT: Improving Prompt Tuning Efficiency via Progressive Training
13 November 2022
Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, Zhiyuan Liu, Qun Liu
Tags: VLM, LRM
Papers citing "FPT: Improving Prompt Tuning Efficiency via Progressive Training" (7 of 7 papers shown):
- Enhancing Low-Resource Relation Representations through Multi-View Decoupling. Chenghao Fan, Wei Wei, Xiaoye Qu, Zhenyi Lu, Wenfeng Xie, Yu Cheng, Dangyang Chen. 26 Dec 2023.
- Adaptive Shortcut Debiasing for Online Continual Learning. Doyoung Kim, Dongmin Park, Yooju Shin, Jihwan Bang, Hwanjun Song, Jae-Gil Lee. 14 Dec 2023. Tags: CLL
- FedYolo: Augmenting Federated Learning with Pretrained Transformers. Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, A. Roy-Chowdhury, A. Suresh, Samet Oymak. 10 Jul 2023. Tags: FedML
- SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer. 15 Oct 2021. Tags: VLM, LRM
- The Power of Scale for Parameter-Efficient Prompt Tuning. Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021. Tags: VPVLM
- On the Transformer Growth for Progressive BERT Training. Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Cheng Chen, Jiawei Han. 23 Oct 2020. Tags: VLM
- The Lottery Ticket Hypothesis for Pre-trained BERT Networks. Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin. 23 Jul 2020.