SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
arXiv 2110.07904 · 15 October 2021
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
VLM, LRM

Papers citing "SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer"
(50 of 89 papers shown)

Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
Enming Zhang, Liwen Cao, Yanru Wu, Zijie Zhao, Guan Wang, Yang Li
09 Apr 2025 · 0 citations

CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System
Yashar Deldjoo, Tommaso Di Noia
24 Feb 2025 · 22 citations

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu, Linjun Zhang, Huaxiu Yao, Haoyu Wang
24 Feb 2025 · 2 citations

SecPE: Secure Prompt Ensembling for Private and Robust Large Language Models
Jiawen Zhang, Kejia Chen, Zunlei Feng, Jian Lou, Mingli Song, Qingbin Liu, Xiaoyu Yang
AAML, SILM, FedML
02 Feb 2025 · 1 citation

BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models
Yibin Wang, Haizhou Shi, Ligong Han, Dimitris N. Metaxas, Hao Wang
BDL, UQLM
28 Jan 2025 · 8 citations

Parameter-Efficient Fine-Tuning for Foundation Models
Dan Zhang, Tao Feng, Lilong Xue, Yuandong Wang, Yuxiao Dong, J. Tang
23 Jan 2025 · 9 citations

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
Liwen Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024 · 5 citations

PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Fanxu Meng, Zhaohui Wang, Muhan Zhang
VLM
03 Apr 2024 · 92 citations

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer
13 Sep 2021 · 61 citations

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
VLM
09 Sep 2021 · 409 citations

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
VLM, SyDa
28 Jul 2021 · 3,906 citations

BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
18 Jun 2021 · 1,191 citations

LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat
17 Jun 2021 · 9,946 citations

DocNLI: A Large-scale Dataset for Document-level Natural Language Inference
Wenpeng Yin, Dragomir R. Radev, Caiming Xiong
HILM
17 Jun 2021 · 98 citations

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder
MoE
08 Jun 2021 · 479 citations

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks
Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson
MoE
08 Jun 2021 · 302 citations

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021 · 3,952 citations

What to Pre-Train on? Efficient Intermediate Task Selection
Clifton A. Poth, Jonas Pfeiffer, Andreas Rucklé, Iryna Gurevych
16 Apr 2021 · 95 citations

Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Guanghui Qin, J. Eisner
14 Apr 2021 · 538 citations

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
LRM
24 Mar 2021 · 138 citations

GPT Understands, Too
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
VLM
18 Mar 2021 · 1,161 citations

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
VLM
02 Feb 2021 · 285 citations

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
01 Jan 2021 · 4,167 citations

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
AAML
01 Jan 2021 · 345 citations

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020 · 1,950 citations

WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization
Faisal Ladhak, Esin Durmus, Claire Cardie, Kathleen McKeown
CVBM
07 Oct 2020 · 198 citations

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
Timo Schick, Hinrich Schütze
15 Sep 2020 · 966 citations

DART: Open-Domain Structured Data Record to Text Generation
Linyong Nan, Dragomir R. Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, ..., Y. Tan, Xi Lin, Caiming Xiong, R. Socher, Nazneen Rajani
06 Jul 2020 · 201 citations

DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
AAML
05 Jun 2020 · 2,682 citations

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL
28 May 2020 · 41,106 citations

Movement Pruning: Adaptive Sparsity by Fine-Tuning
Victor Sanh, Thomas Wolf, Alexander M. Rush
15 May 2020 · 475 citations

Neural CRF Model for Sentence Alignment in Text Simplification
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wenyuan Xu
05 May 2020 · 160 citations

Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman
CLL, LRM
01 May 2020 · 194 citations

GoEmotions: A Dataset of Fine-Grained Emotions
Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan S. Cowen, Gaurav Nemade, Sujith Ravi
AI4MH
01 May 2020 · 695 citations

Crisscrossed Captions: Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO
Zarana Parekh, Jason Baldridge, Daniel Cer, Austin Waters, Yinfei Yang
30 Apr 2020 · 61 citations

MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
MQ
06 Apr 2020 · 807 citations

Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data
Xi Yan, David Acuna, Sanja Fidler
09 Jan 2020 · 43 citations

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
KELM
28 Nov 2019 · 1,387 citations

SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
Bogdan Gliwa, Iwona Mochol, M. Biesek, A. Wawer
27 Nov 2019 · 623 citations

PIQA: Reasoning about Physical Commonsense in Natural Language
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi
OOD, LRM
26 Nov 2019 · 1,724 citations

Semantic Noise Matters for Neural Natural Language Generation
Ondrej Dusek, David M. Howcroft, Verena Rieser
10 Nov 2019 · 118 citations

Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie, Adina Williams, Emily Dinan, Joey Tianyi Zhou, Jason Weston, Douwe Kiela
31 Oct 2019 · 991 citations

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
AIMat
23 Oct 2019 · 19,824 citations

MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
22 Oct 2019 · 304 citations

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
02 Oct 2019 · 7,386 citations

BillSum: A Corpus for Automatic Summarization of US Legislation
Anastassia Kornilova, Vladimir Eidelman
AILaw
01 Oct 2019 · 155 citations

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
SSL, AIMat
26 Sep 2019 · 6,420 citations

Reducing Transformer Depth on Demand with Structured Dropout
Angela Fan, Edouard Grave, Armand Joulin
25 Sep 2019 · 586 citations

TinyBERT: Distilling BERT for Natural Language Understanding
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu
VLM
23 Sep 2019 · 1,838 citations

Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, Pranav Khaitan
12 Sep 2019 · 609 citations