What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation
arXiv:2403.06408 (11 March 2024)
Authors: Zhuocheng Gong, Jiahao Liu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan
Tags: MQ
Papers citing "What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation" (7 of 7 papers shown)
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth (02 May 2025)
Authors: Changhai Zhou, Yuhua Zhou, Qian Qiao, Weizhong Zhang, Cheng Jin
Tags: MQ
How Does Quantization Affect Multilingual LLMs? (03 Jul 2024)
Authors: Kelly Marchisio, Saurabh Dash, Hongyu Chen, Dennis Aumiller, A. Ustun, Sara Hooker, Sebastian Ruder
Tags: MQ
Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging (24 Jun 2024)
Authors: Deyuan Liu, Zhanyue Qin, Hairu Wang, Zhao Yang, Zecheng Wang, ..., Zhao Lv, Zhiying Tu, Dianhui Chu, Bo Li, Dianbo Sui
Perturbation-Restrained Sequential Model Editing (27 May 2024)
Authors: Junjie Ma, Hong Wang, Haoyang Xu, Zhen-Hua Ling, Jia-Chen Gu
Tags: KELM
ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation (15 Mar 2023)
Authors: Z. Yao, Xiaoxia Wu, Cheng-rong Li, Stephen Youn, Yuxiong He
Tags: MQ
I-BERT: Integer-only BERT Quantization (05 Jan 2021)
Authors: Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
Tags: MQ
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT (12 Sep 2019)
Authors: Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
Tags: MQ