Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
arXiv:2303.02861 · 6 March 2023
Authors: Zhen Wang, Yikang Shen, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
Tags: VLM, VPVLM
Papers citing "Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning" (30 of 80 papers shown):
- A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models. Yuanfeng Song, Yuanqin He, Xuefang Zhao, Hanlin Gu, Di Jiang, Haijun Yang, Lixin Fan, Qiang Yang (24 Oct 2023)
- Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling. Yaqing Wang, Jialin Wu, T. Dabral, Jiageng Zhang, Geoff Brown, ..., Frederick Liu, Yi Liang, Bo Pang, Michael Bendersky, Radu Soricut [VLM] (18 Oct 2023)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning. Hao Zhao, Jie Fu, Zhaofeng He (18 Oct 2023)
- Decomposed Prompt Tuning via Low-Rank Reparameterization. Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li [VLM] (16 Oct 2023)
- The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations. Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, S.M. Towhidul Islam Tonmoy, Aman Chadha, Mikel Artetxe, Punit Daniel Simig [HILM] (08 Oct 2023)
- Fine-tune Language Models to Approximate Unbiased In-context Learning. Timothy Chu, Zhao Song, Chiwun Yang (05 Oct 2023)
- EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models. Yefei He, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang [DiffM, MQ] (05 Oct 2023)
- Prompting-based Temporal Domain Generalization. Sepidehsadat Hosseini, Mengyao Zhai, Hossein Hajimirsadegh, Frederick Tung [AI4TS, AI4CE, OOD] (03 Oct 2023)
- Zero-Shot Continuous Prompt Transfer: Generalizing Task Semantics Across Language Models. Zijun Wu, Yongkang Wu, Lili Mou [VLM] (02 Oct 2023)
- ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale. Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz (02 Oct 2023)
- Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness. Vipula Rawte, Prachi Priya, S.M. Towhidul Islam Tonmoy, M. M. Zaman, A. Sheth, Amitava Das (20 Sep 2023)
- DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning. Zhengxiang Shi, Aldo Lipani [VLM] (11 Sep 2023)
- Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models. Zhiyuan Peng, Xuyang Wu, Qifan Wang, Yihan Fang [VLM, RALM] (17 Jul 2023)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai (13 Jul 2023)
- PromptIR: Prompting for All-in-One Blind Image Restoration. Vaishnav Potlapalli, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan [VLM] (22 Jun 2023)
- Condensing Multilingual Knowledge with Lightweight Language-Specific Modules. Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, Kenton W. Murray (23 May 2023)
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings. Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu [RALM, LLMAG] (19 May 2023)
- G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks. Anchun Gui, Jinqiang Ye, Han Xiao (17 May 2023)
- Soft Prompt Decoding for Multilingual Dense Retrieval. Zhiqi Huang, Hansi Zeng, Hamed Zamani, James Allan [RALM] (15 May 2023)
- EPVT: Environment-aware Prompt Vision Transformer for Domain Generalization in Skin Lesion Recognition. Siyuan Yan, Chih-Chen Liu, Zhen Yu, Lie Ju, Dwarikanath Mahapatra, Victoria Mar, Monika Janda, Peter Soyer, Z. Ge [ViT, MedIm, VLM] (04 Apr 2023)
- ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning. Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen [MoMe] (02 Dec 2022)
- Multitask Prompted Training Enables Zero-Shot Task Generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush [LRM] (15 Oct 2021)
- SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer [VLM, LRM] (15 Oct 2021)
- P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks. Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang [VLM] (14 Oct 2021)
- Multi-Task Learning in Natural Language Processing: An Overview. Shijie Chen, Yu Zhang, Qiang Yang [AIMat] (19 Sep 2021)
- CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP. Qinyuan Ye, Bill Yuchen Lin, Xiang Ren (18 Apr 2021)
- The Power of Scale for Parameter-Efficient Prompt Tuning. Brian Lester, Rami Al-Rfou, Noah Constant [VPVLM] (18 Apr 2021)
- Making Pre-trained Language Models Better Few-shot Learners. Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)
- Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. Timo Schick, Hinrich Schütze (21 Jan 2020)
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman [ELM] (20 Apr 2018)