Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs
arXiv:2304.14999 · 28 April 2023
George Pu, Anirudh Jain, Jihan Yin, Russell Kaplan
Papers citing "Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs" (6 of 6 papers shown)
GemmAr: Enhancing LLMs Through Arabic Instruction-Tuning
Hasna Chouikhi, Manel Aloui, Cyrine Ben Hammou, Ghaith Chaabane, Haithem Kchaou, Chehir Dhaouadi
02 Jul 2024

ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts
Samar Khanna, Medhanie Irgau, David B. Lobell, Stefano Ermon
VLM · 16 Jun 2024

Mixed Text Recognition with Efficient Parameter Fine-Tuning and Transformer
Da Chang, Yu Li
19 Apr 2024

Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng
FedML · 11 Dec 2023

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM · 15 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021