The Power of Scale for Parameter-Efficient Prompt Tuning

18 April 2021
Brian Lester
Rami Al-Rfou
Noah Constant
    VPVLM
arXiv:2104.08691

Papers citing "The Power of Scale for Parameter-Efficient Prompt Tuning"

50 / 163 papers shown
Mixture of Experts Meets Prompt-Based Continual Learning
Minh Le
An Nguyen
Huy Nguyen
Trang Nguyen
Trang Pham
L. Ngo
Nhat Ho
CLL
88
12
0
23 May 2024
Implicit In-context Learning
Zhuowei Li
Zihao Xu
Ligong Han
Yunhe Gao
Song Wen
Di Liu
Hao Wang
Dimitris N. Metaxas
98
3
0
23 May 2024
Large Language Models for Cyber Security: A Systematic Literature Review
HanXiang Xu
Shenao Wang
Ningke Li
Kaidi Wang
Yanjie Zhao
Kai Chen
Ting Yu
Yang Liu
Haoyu Wang
104
40
0
08 May 2024
Do LLMs Understand Visual Anomalies? Uncovering LLM's Capabilities in Zero-shot Anomaly Detection
Jiaqi Zhu
Shaofeng Cai
Fang Deng
Junran Wu
99
16
0
15 Apr 2024
AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees
William Fleshman
Aleem Khan
Marc Marone
Benjamin Van Durme
CLL
KELM
96
4
0
12 Apr 2024
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Fanxu Meng
Zhaohui Wang
Muhan Zhang
VLM
123
95
0
03 Apr 2024
A Survey on Large Language Model-Based Game Agents
Sihao Hu
Tiansheng Huang
Gaowen Liu
Ramana Rao Kompella
Selim Furkan Tekin
Yichang Xu
Zachary Yahn
Ling Liu
LLMAG
LM&Ro
AI4CE
LM&MA
127
56
0
02 Apr 2024
CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models
Xuechen Liang
Meiling Tao
Yinghui Xia
Yiting Xie
Jun Wang
JingSong Yang
LLMAG
72
12
0
02 Apr 2024
ADAPT to Robustify Prompt Tuning Vision Transformers
Masih Eskandar
Tooba Imtiaz
Zifeng Wang
Jennifer Dy
VPVLM
VLM
AAML
69
0
0
19 Mar 2024
Explore In-Context Segmentation via Latent Diffusion Models
Chaoyang Wang
Xiangtai Li
Henghui Ding
Lu Qi
Jiangning Zhang
Yunhai Tong
Chen Change Loy
Shuicheng Yan
DiffM
112
6
0
14 Mar 2024
FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning
Xupeng Miao
Gabriele Oliaro
Xinhao Cheng
Vineeth Kada
Ruohan Gao
...
April Yang
Yingcheng Wang
Mengdi Wu
Colin Unger
Zhihao Jia
MoE
144
10
0
29 Feb 2024
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning
Pengjie Ren
Chengshun Shi
Shiguang Wu
Mengqi Zhang
Zhaochun Ren
Maarten de Rijke
Zhumin Chen
Jiahuan Pei
MoE
160
14
0
27 Feb 2024
Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers
Shuzhou Yuan
Ercong Nie
Bolei Ma
Michael Färber
73
3
0
18 Feb 2024
Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning
Lanqing Li
Hai Zhang
Xinyu Zhang
Shatong Zhu
Junqiao Zhao
Pheng-Ann Heng
OffRL
76
8
0
04 Feb 2024
Private Fine-tuning of Large Language Models with Zeroth-order Optimization
Xinyu Tang
Ashwinee Panda
Milad Nasr
Saeed Mahloujifar
Prateek Mittal
149
23
0
09 Jan 2024
Prompting Disentangled Embeddings for Knowledge Graph Completion with Pre-trained Language Model
Yuxia Geng
Jiaoyan Chen
Yuhang Zeng
Zhuo Chen
Wen Zhang
Jeff Z. Pan
Yuxiang Wang
Xiaoliang Xu
103
2
0
04 Dec 2023
Regularization by Texts for Latent Diffusion Inverse Solvers
Jeongsol Kim
Geon Yeong Park
Hyungjin Chung
Jong Chul Ye
AI4CE
109
16
0
27 Nov 2023
Paragraph-to-Image Generation with Information-Enriched Diffusion Model
Weijia Wu
Zhuang Li
Yefei He
Mike Zheng Shou
Chunhua Shen
Lele Cheng
Yan Li
Di Zhang
VLM
172
25
0
24 Nov 2023
GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values
Farnoosh Javadi
Walid Ahmed
Habib Hajimolahoseini
Foozhan Ataiefard
Mohammad Hassanpour
Saina Asani
Austin Wen
Omar Mohamed Awad
Kangling Liu
Yang Liu
VLM
78
8
0
06 Nov 2023
Federated Class-Incremental Learning with Prompting
Jiale Liu
Yu-Wei Zhan
Chong-Yu Zhang
Xin Luo
Zhen-Duo Chen
Yinwei Wei
CLL
FedML
64
3
0
13 Oct 2023
Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation
Chen Dun
Mirian Hipolito Garcia
Guoqing Zheng
Ahmed Hassan Awadallah
Anastasios Kyrillidis
Robert Sim
160
6
0
04 Oct 2023
When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang
Chen Chen
Lingjuan Lyu
Chong Chen
Yaochu Jin
AIFin
AI4CE
166
95
0
27 Jun 2023
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering
Rabiul Awal
Le Zhang
Aishwarya Agrawal
LRM
98
13
0
16 Jun 2023
Privacy-Preserving Prompt Tuning for Large Language Model Services
Yansong Li
Zhixing Tan
Yang Liu
SILM
VLM
79
68
0
10 May 2023
Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation
Zhexin Zhang
Jiale Cheng
Hao Sun
Jiawen Deng
Fei Mi
Yasheng Wang
Lifeng Shang
Minlie Huang
SILM
127
9
0
04 Dec 2022
ADEPT: A DEbiasing PrompT Framework
Ke Yang
Charles Yu
Yi R. Fung
Manling Li
Heng Ji
84
24
0
10 Nov 2022
PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning
Rongzhi Zhang
Yue Yu
Pranav Shetty
Le Song
Chao Zhang
97
25
0
18 Mar 2022
Hyperdecoders: Instance-specific decoders for multi-task NLP
Hamish Ivison
Matthew E. Peters
AI4CE
89
22
0
15 Mar 2022
Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models
Weiqi Sun
Haidar Khan
Nicolas Guenon des Mesnards
M. Rubino
Konstantine Arkoudas
101
5
0
05 Mar 2022
CLIP-Adapter: Better Vision-Language Models with Feature Adapters
Peng Gao
Shijie Geng
Renrui Zhang
Teli Ma
Rongyao Fang
Yongfeng Zhang
Hongsheng Li
Yu Qiao
VLM
CLIP
281
1,040
0
09 Oct 2021
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Guanghui Qin
J. Eisner
61
546
0
14 Apr 2021
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li
Percy Liang
223
4,254
0
01 Jan 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan
Hrant Khachatrian
Jonathan May
AAML
316
350
0
01 Jan 2021
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
Armen Aghajanyan
Luke Zettlemoyer
Sonal Gupta
92
562
1
22 Dec 2020
Few-shot Sequence Learning with Transformers
Lajanugen Logeswaran
Ann Lee
Myle Ott
Honglak Lee
Marc'Aurelio Ranzato
Arthur Szlam
ViT
62
12
0
17 Dec 2020
Language Models are Few-Shot Learners
Tom B. Brown
Benjamin Mann
Nick Ryder
Melanie Subbiah
Jared Kaplan
...
Christopher Berner
Sam McCandlish
Alec Radford
Ilya Sutskever
Dario Amodei
BDL
743
41,932
0
28 May 2020
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
Jonas Pfeiffer
Ivan Vulić
Iryna Gurevych
Sebastian Ruder
99
626
0
30 Apr 2020
GLU Variants Improve Transformer
Noam M. Shazeer
126
996
0
12 Feb 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick
Hinrich Schütze
344
1,613
0
21 Jan 2020
How Can We Know What Language Models Know?
Zhengbao Jiang
Frank F. Xu
Jun Araki
Graham Neubig
KELM
130
1,405
0
28 Nov 2019
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis
Yinhan Liu
Naman Goyal
Marjan Ghazvininejad
Abdel-rahman Mohamed
Omer Levy
Veselin Stoyanov
Luke Zettlemoyer
AIMat
VLM
246
10,819
0
29 Oct 2019
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel
Noam M. Shazeer
Adam Roberts
Katherine Lee
Sharan Narang
Michael Matena
Yanqi Zhou
Wei Li
Peter J. Liu
AIMat
419
20,127
0
23 Oct 2019
MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
Adam Fisch
Alon Talmor
Robin Jia
Minjoon Seo
Eunsol Choi
Danqi Chen
61
306
0
22 Oct 2019
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark
Kenton Lee
Ming-Wei Chang
Tom Kwiatkowski
Michael Collins
Kristina Toutanova
217
1,517
0
24 May 2019
A Surprisingly Robust Trick for Winograd Schema Challenge
Vid Kocijan
Ana-Maria Cretu
Oana-Maria Camburu
Yordan Yordanov
Thomas Lukasiewicz
77
101
0
15 May 2019
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Wang
Yada Pruksachatkun
Nikita Nangia
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
256
2,312
0
02 May 2019
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
Dheeru Dua
Yizhong Wang
Pradeep Dasigi
Gabriel Stanovsky
Sameer Singh
Matt Gardner
AIMat
93
950
0
01 Mar 2019
Task2Vec: Task Embedding for Meta-Learning
Alessandro Achille
Michael Lam
Rahul Tewari
Avinash Ravichandran
Subhransu Maji
Charless C. Fowlkes
Stefano Soatto
Pietro Perona
SSL
75
314
0
10 Feb 2019
Parameter-Efficient Transfer Learning for NLP
N. Houlsby
A. Giurgiu
Stanislaw Jastrzebski
Bruna Morrone
Quentin de Laroussilhe
Andrea Gesmundo
Mona Attariyan
Sylvain Gelly
210
4,451
0
02 Feb 2019
ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension
Sheng Zhang
Xiaodong Liu
Jingjing Liu
Jianfeng Gao
Kevin Duh
Benjamin Van Durme
69
314
0
30 Oct 2018