An empirical study of LLaMA3 quantization: from LLMs to MLLMs
22 April 2024 · arXiv:2404.14047 · MQ
Wei Huang, Xingyu Zheng, Xudong Ma, Haotong Qin, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno

Papers citing "An empirical study of LLaMA3 quantization: from LLMs to MLLMs"

29 / 29 papers shown
• Resource-Efficient Language Models: Quantization for Fast and Accessible Inference
  Tollef Emil Jørgensen · MQ · 13 May 2025
• An Empirical Study of Qwen3 Quantization
  Xingyu Zheng, Yuye Li, Haoran Chu, Yue Feng, Xudong Ma, Jie Luo, Jinyang Guo, Haotong Qin, Michele Magno, Xianglong Liu · MQ · 04 May 2025
• NoWag: A Unified Framework for Shape Preserving Compression of Large Language Models
  Lawrence Liu, Inesh Chakrabarti, Yixiao Li, Mengdi Wang, Tuo Zhao, Lin F. Yang · MQ · 20 Apr 2025
• Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency
  E. J. Husom, Arda Goknil, Merve Astekin, Lwin Khin Shar, Andre Kåsen, S. Sen, Benedikt Andreas Mithassel, Ahmet Soylu · MQ · 04 Apr 2025
• Mixture-of-Attack-Experts with Class Regularization for Unified Physical-Digital Face Attack Detection
  Shunxin Chen, A. Liu, Junze Zheng, Jun Wan, Kailai Peng, Sergio Escalera, Zhen Lei · AAML · 01 Apr 2025
• AdaMHF: Adaptive Multimodal Hierarchical Fusion for Survival Prediction
  S. Zhang, Xun Lin, Rongxiang Zhang, Yu Bai, Yong Xu, Tao Tan, Xunbin Zheng, Zitong Yu · 27 Mar 2025
• LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models
  Jian Liang, Wenke Huang, Guancheng Wan, Qu Yang, Mang Ye · MoMe, CLL, AI4CE · 21 Mar 2025
• ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning
  Baohao Liao, Christian Herold, Seyyed Hadi Hashemi, Stefan Vasilev, Shahram Khadivi, Christof Monz · MQ · 17 Mar 2025
• GPT's Devastated and LLaMA's Content: Emotion Representation Alignment in LLMs for Keyword-based Generation
  Shadab Choudhury, Asha Kumar, Lara J. Martin · 14 Mar 2025
• FlexInfer: Breaking Memory Constraint via Flexible and Efficient Offloading for On-Device LLM Inference
  Hongchao Du, Shangyu Wu, Arina Kharlamova, Nan Guan, Chun Jason Xue · 04 Mar 2025
• Evaluating Large Language Models for Public Health Classification and Extraction Tasks
  Joshua Harris, Timothy Laurence, Leo Loman, Fan Grayson, Toby Nonnenmacher, ..., Hamish Mohammed, Thomas Finnie, Luke Hounsome, Michael Borowitz, Steven Riley · LM&MA, AI4MH · 20 Feb 2025
• LLMs Can Teach Themselves to Better Predict the Future
  Benjamin Turtel, Danny Franklin, Philipp Schoenegger · LRM · 07 Feb 2025
• Pushing the Envelope of Low-Bit LLM via Dynamic Error Compensation
  Y. Park, Jake Hyun, Hojoon Kim, Jae W. Lee · MQ · 31 Dec 2024
• TesseraQ: Ultra Low-Bit LLM Post-Training Quantization with Block Reconstruction
  Yuhang Li, Priyadarshini Panda · MQ · 24 Oct 2024
• KatzBot: Revolutionizing Academic Chatbot for Enhanced Communication
  Sahil Kumar, Deepa Paikar, Kiran Sai Vutukuri, Haider Ali, Shashidhar Reddy Ainala, Aditya Murli Krishnan, Youshan Zhang · 21 Oct 2024
• MoDeGPT: Modular Decomposition for Large Language Model Compression
  Chi-Heng Lin, Shangqian Gao, James Seale Smith, Abhishek Patel, Shikhar Tuli, Yilin Shen, Hongxia Jin, Yen-Chang Hsu · 19 Aug 2024
• QVD: Post-training Quantization for Video Diffusion Models
  Shilong Tian, Hong Chen, Chengtao Lv, Yu Liu, Jinyang Guo, Xianglong Liu, Shengxi Li, Hao Yang, Tao Xie · VGen, MQ · 16 Jul 2024
• LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices
  Jung Hyun Lee, Jeonghoon Kim, J. Yang, S. Kwon, Eunho Yang, Kang Min Yoo, Dongsoo Lee · MQ · 16 Jul 2024
• EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
  Mengzhao Chen, Wenqi Shao, Peng Xu, Jiahao Wang, Peng Gao, Kaipeng Zhang, Yu Qiao · MQ · 10 Jul 2024
• How Does Quantization Affect Multilingual LLMs?
  Kelly Marchisio, Saurabh Dash, Hongyu Chen, Dennis Aumiller, A. Ustun, Sara Hooker, Sebastian Ruder · MQ · 03 Jul 2024
• Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox
  Yijun Liu, Yuan Meng, Fang Wu, Shenhao Peng, Hang Yao, Chaoyu Guan, Chen Tang, Xinzhu Ma, Zhi Wang, Wenwu Zhu · MQ · 15 Jun 2024
• TernaryLLM: Ternarized Large Language Model
  Tianqi Chen, Zhe Li, Weixiang Xu, Zeyu Zhu, Dong Li, Lu Tian, E. Barsoum, Peisong Wang, Jian Cheng · 11 Jun 2024
• PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression
  Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh, Peter Richtárik · MQ · 23 May 2024
• SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models
  Wei Huang, Haotong Qin, Yangdong Liu, Yawei Li, Xianglong Liu, Luca Benini, Michele Magno, Xiaojuan Qi · MQ · 23 May 2024
• Integer Scale: A Free Lunch for Faster Fine-grained Quantization of LLMs
  Qingyuan Li, Ran Meng, Yiduo Li, Bo Zhang, Yifan Lu, Yerui Sun, Lin Ma, Yuchen Xie · MQ · 23 May 2024
• Large Language Models Reveal Information Operation Goals, Tactics, and Narrative Frames
  Keith Burghardt, Kai Chen, Kristina Lerman · 06 May 2024
• ApiQ: Finetuning of 2-Bit Quantized Large Language Model
  Baohao Liao, Christian Herold, Shahram Khadivi, Christof Monz · CLL, MQ · 07 Feb 2024
• BiLLM: Pushing the Limit of Post-Training Quantization for LLMs
  Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, Xiaojuan Qi · MQ · 06 Feb 2024
• Neural Network Compression using Binarization and Few Full-Precision Weights
  F. M. Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini · MQ · 15 Jun 2023