The Power of Scale for Parameter-Efficient Prompt Tuning

18 April 2021
Brian Lester
Rami Al-Rfou
Noah Constant
    VPVLM
arXiv: 2104.08691

Papers citing "The Power of Scale for Parameter-Efficient Prompt Tuning"

Showing 50 of 2,607 citing papers
Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study
Xin Xu
Xiang Chen
Ningyu Zhang
Xin Xie
Xi Chen
Huajun Chen
105
10
0
19 Oct 2022
CPL: Counterfactual Prompt Learning for Vision and Language Models
Xuehai He
Diji Yang
Weixi Feng
Tsu-Jui Fu
Arjun Reddy Akula
Varun Jampani
P. Narayana
Sugato Basu
William Yang Wang
Xinze Wang
VPVLM, VLM
100
15
0
19 Oct 2022
Revision Transformers: Instructing Language Models to Change their Values
Felix Friedrich
Wolfgang Stammer
P. Schramowski
Kristian Kersting
KELM
75
8
0
19 Oct 2022
Continued Pretraining for Better Zero- and Few-Shot Promptability
Zhaofeng Wu
Robert L. Logan IV
Pete Walsh
Akshita Bhagia
Dirk Groeneveld
Sameer Singh
Iz Beltagy
VLM
108
12
0
19 Oct 2022
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters
Hongyu Zhao
Hao Tan
Hongyuan Mei
MoE
81
18
0
18 Oct 2022
DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation
Hanqing Zhang
Dawei Song
89
37
0
18 Oct 2022
Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints
Omid Rohanian
Hannah Jauncey
Mohammadmahdi Nouriborji
Vinod Kumar Chauhan
Bronner P. Gonçalves
Christiana Kartsonaki
ISARIC Clinical Characterisation Group
L. Merson
David Clifton
59
7
0
17 Oct 2022
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
Mirac Suzgun
Nathan Scales
Nathanael Schärli
Sebastian Gehrmann
Yi Tay
...
Aakanksha Chowdhery
Quoc V. Le
Ed H. Chi
Denny Zhou
Jason W. Wei
ALM, ELM, LRM, ReLM
292
1,143
0
17 Oct 2022
Prompting GPT-3 To Be Reliable
Chenglei Si
Zhe Gan
Zhengyuan Yang
Shuohang Wang
Jianfeng Wang
Jordan L. Boyd-Graber
Lijuan Wang
KELM, LRM
113
303
0
17 Oct 2022
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning
Tianxiang Sun
Zhengfu He
Qinen Zhu
Xipeng Qiu
Xuanjing Huang
VLM, VPVLM
38
21
0
14 Oct 2022
DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation
Mojtaba Valipour
Mehdi Rezagholizadeh
I. Kobyzev
A. Ghodsi
160
185
0
14 Oct 2022
Unified Vision and Language Prompt Learning
Yuhang Zang
Wei Li
Kaiyang Zhou
Chen Huang
Chen Change Loy
VLM, VPVLM
80
151
0
13 Oct 2022
Are Sample-Efficient NLP Models More Robust?
Nelson F. Liu
Ananya Kumar
Percy Liang
Robin Jia
VLM, OOD
67
6
0
12 Oct 2022
MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers
Mohammadmahdi Nouriborji
Omid Rohanian
Samaneh Kouchaki
David Clifton
81
8
0
12 Oct 2022
A Kernel-Based View of Language Model Fine-Tuning
Sadhika Malladi
Alexander Wettig
Dingli Yu
Danqi Chen
Sanjeev Arora
VLM
157
69
0
11 Oct 2022
Continual Training of Language Models for Few-Shot Learning
Zixuan Ke
Haowei Lin
Yijia Shao
Hu Xu
Lei Shu
Bing Liu
KELM, BDL, CLL
138
36
0
11 Oct 2022
Transformers generalize differently from information stored in context vs in weights
Stephanie C. Y. Chan
Ishita Dasgupta
Junkyung Kim
D. Kumaran
Andrew Kyle Lampinen
Felix Hill
216
50
0
11 Oct 2022
Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts
Cicero Nogueira dos Santos
Zhe Dong
Daniel Cer
John Nham
Siamak Shakeri
Jianmo Ni
Yun-hsuan Sung
VLM, KELM
80
8
0
10 Oct 2022
XPrompt: Exploring the Extreme of Prompt Tuning
Fang Ma
Chen Zhang
Lei Ren
Jingang Wang
Qifan Wang
Wei Wu
Xiaojun Quan
Dawei Song
VLM
150
39
0
10 Oct 2022
Parameter-Efficient Tuning with Special Token Adaptation
Xiaocong Yang
James Y. Huang
Wenxuan Zhou
Muhao Chen
91
12
0
10 Oct 2022
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
Shwai He
Liang Ding
Daize Dong
Miao Zhang
Dacheng Tao
MoE
137
91
0
09 Oct 2022
Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP
Feng Liang
Bichen Wu
Xiaoliang Dai
Kunpeng Li
Yinan Zhao
Hang Zhang
Peizhao Zhang
Peter Vajda
Diana Marculescu
CLIP, VLM
135
460
0
09 Oct 2022
Data-Efficiency with a Single GPU: An Exploration of Transfer Methods for Small Language Models
Alon Albalak
Akshat Shrivastava
Chinnadhurai Sankar
Adithya Sagar
Mike Ross
84
3
0
08 Oct 2022
AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
S. Kwon
Jeonghoon Kim
Jeongin Bae
Kang Min Yoo
Jin-Hwa Kim
Baeseong Park
Byeongwook Kim
Jung-Woo Ha
Nako Sung
Dongsoo Lee
MQ
119
31
0
08 Oct 2022
SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models
Omiros Pantazis
Gabriel J. Brostow
Kate E. Jones
Oisin Mac Aodha
VLM
82
42
0
07 Oct 2022
Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
Nghia T. Le
Fan Bai
Alan Ritter
135
12
0
07 Oct 2022
A Unified Framework for Multi-intent Spoken Language Understanding with prompting
Feifan Song
Lianzhe Huang
Houfeng Wang
56
3
0
07 Oct 2022
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
Yen-Cheng Liu
Chih-Yao Ma
Junjiao Tian
Zijian He
Z. Kira
160
52
0
07 Oct 2022
Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models
David Wingate
Mohammad Shoeybi
Taylor Sorensen
91
77
0
06 Oct 2022
MaPLe: Multi-modal Prompt Learning
Muhammad Uzair Khattak
H. Rasheed
Muhammad Maaz
Salman Khan
Fahad Shahbaz Khan
VPVLM, VLM
297
574
0
06 Oct 2022
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye
Joel Jang
Doyoung Kim
Yongrae Jo
Minjoon Seo
VLM
104
2
0
06 Oct 2022
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye
Doyoung Kim
Joel Jang
Joongbo Shin
Minjoon Seo
FedML, VLM, UQCV, LRM
113
25
0
06 Oct 2022
Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo
Boyang Albert Li
Han Yu
VLM
121
24
0
06 Oct 2022
Reprogramming Pretrained Language Models for Antibody Sequence Infilling
Igor Melnyk
Vijil Chenthamarakshan
Pin-Yu Chen
Payel Das
Amit Dhurandhar
Inkit Padhi
Devleena Das
75
33
0
05 Oct 2022
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng
Xiao Liu
Zhengxiao Du
Zihan Wang
Hanyu Lai
...
Jidong Zhai
Wenguang Chen
Peng Zhang
Yuxiao Dong
Jie Tang
BDL, LRM
391
1,102
0
05 Oct 2022
Bayesian Prompt Learning for Image-Language Model Generalization
Mohammad Mahdi Derakhshani
Enrique Sanchez
Adrian Bulat
Victor G. Turrisi da Costa
Cees G. M. Snoek
Georgios Tzimiropoulos
Brais Martínez
VPVLM, VLM
171
37
0
05 Oct 2022
LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models
Adrian Bulat
Georgios Tzimiropoulos
VLM, VPVLM
64
51
0
03 Oct 2022
LPT: Long-tailed Prompt Tuning for Image Classification
Bowen Dong
Pan Zhou
Shuicheng Yan
W. Zuo
VPVLM, VLM
177
61
0
03 Oct 2022
Visual Prompt Tuning for Generative Transfer Learning
Kihyuk Sohn
Yuan Hao
José Lezama
Luisa F. Polanía
Huiwen Chang
Han Zhang
Irfan Essa
Lu Jiang
VPVLM, VLM
161
89
0
03 Oct 2022
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu
Yu Wang
Sheng Zha
George Karypis
135
48
0
30 Sep 2022
Universal Prompt Tuning for Graph Neural Networks
Taoran Fang
Yunchao Zhang
Yang Yang
Chunping Wang
Lei Chen
122
60
0
30 Sep 2022
Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation
Haoran Chen
Xintong Han
Zuxuan Wu
Yu-Gang Jiang
165
28
0
30 Sep 2022
What Makes Pre-trained Language Models Better Zero-shot Learners?
Jinghui Lu
Dongsheng Zhu
Weidong Han
Rui Zhao
Brian Mac Namee
Fei Tan
101
24
0
30 Sep 2022
Linearly Mapping from Image to Text Space
Jack Merullo
Louis Castricato
Carsten Eickhoff
Ellie Pavlick
VLM
248
118
0
30 Sep 2022
Bidirectional Language Models Are Also Few-shot Learners
Ajay Patel
Bryan Li
Mohammad Sadegh Rasooli
Noah Constant
Colin Raffel
Chris Callison-Burch
LRM
140
47
0
29 Sep 2022
Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding
Erica K. Shimomoto
Edison Marrese-Taylor
Hiroya Takamura
Ichiro Kobayashi
Hideki Nakayama
Yusuke Miyao
88
7
0
26 Sep 2022
MetaPrompting: Learning to Learn Better Prompts
Yutai Hou
Hongyuan Dong
Xinghao Wang
Bohan Li
Wanxiang Che
VLM
83
30
0
23 Sep 2022
Prompting for a conversation: How to control a dialog model?
Josef Valvoda
Yimai Fang
David Vandyke
224
5
0
22 Sep 2022
A Few-shot Approach to Resume Information Extraction via Prompts
Chengguang Gan
Tatsunori Mori
41
10
0
20 Sep 2022
Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models
Zichun Yu
Tianyu Gao
Zhengyan Zhang
Yankai Lin
Zhiyuan Liu
Maosong Sun
Jie Zhou
VLM, LRM
51
1
0
20 Sep 2022