L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models

7 February 2024
Hyesung Jeon, Yulhwa Kim, Jae-Joon Kim
MQ

Papers citing "L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models"

2 / 2 papers shown

Scaling laws for post-training quantized large language models
Zifei Xu, Alexander Lan, W. Yazar, T. Webb, Sayeh Sharify, Xin Wang
MQ · 15 Oct 2024

Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
CLL, LRM · 24 May 2022