arXiv:2406.19486
LoPT: Low-Rank Prompt Tuning for Parameter Efficient Language Models
Shouchang Guo, Sonam Damani, Keng-hao Chang (27 June 2024)
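The title names the core idea: the soft prompt is made parameter-efficient by representing it in low rank. A minimal sketch under that reading, where the prompt matrix is factored into two small trainable matrices (the names, shapes, and exact factorization here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical low-rank prompt parameterization: instead of training a full
# (n_tokens x d_model) soft-prompt matrix, train two small factors A and B.
n_tokens, d_model, rank = 20, 768, 4

rng = np.random.default_rng(0)
A = rng.standard_normal((n_tokens, rank)) * 0.02  # trainable factor
B = rng.standard_normal((rank, d_model)) * 0.02   # trainable factor

# Soft prompt prepended to the input embeddings at tuning time.
prompt = A @ B  # shape: (n_tokens, d_model)

# Parameter count: full prompt tuning vs. the low-rank factorization.
full_params = n_tokens * d_model          # n * d
lopt_params = rank * (n_tokens + d_model)  # r * (n + d), much smaller when r << min(n, d)
print(prompt.shape, full_params, lopt_params)
```

With these illustrative sizes the factorization stores 3,152 parameters instead of 15,360, which is the kind of reduction a low-rank prompt parameterization targets.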
Papers citing "LoPT: Low-Rank Prompt Tuning for Parameter Efficient Language Models" (4 / 4 papers shown)
Honey, I Shrunk the Language Model: Impact of Knowledge Distillation Methods on Performance and Explainability
Daniel Hendriks, Philipp Spitzer, Niklas Kühl, G. Satzger (22 Apr 2025)
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi (24 May 2022)
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush (15 Oct 2021)
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant (18 Apr 2021)