arXiv:2402.04902
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models
7 February 2024
Hyesung Jeon, Yulhwa Kim, Jae-Joon Kim
MQ
Papers citing "L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models" (2 of 2 papers shown)

Scaling laws for post-training quantized large language models
Zifei Xu, Alexander Lan, W. Yazar, T. Webb, Sayeh Sharify, Xin Wang
MQ
15 Oct 2024

Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
CLL, LRM
24 May 2022