ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.


One QuantLLM for ALL: Fine-tuning Quantized LLMs Once for Efficient Deployments

30 May 2024
Ke Yi, Yuhui Xu, Heng Chang, Chen Tang, Yuan Meng, Tong Zhang, Jia Li
MQ

Papers citing "One QuantLLM for ALL: Fine-tuning Quantized LLMs Once for Efficient Deployments"

2 of 2 citing papers shown
Rotated Runtime Smooth: Training-Free Activation Smoother for Accurate INT4 Inference
Ke Yi, Zengke Liu, Jianwei Zhang, Chengyuan Li, Tong Zhang, Junyang Lin, Jingren Zhou
MQ
30 Sep 2024
Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox
Yijun Liu, Yuan Meng, Fang Wu, Shenhao Peng, Hang Yao, Chaoyu Guan, Chen Tang, Xinzhu Ma, Zhi Wang, Wenwu Zhu
MQ
15 Jun 2024