
Unifying Uniform and Binary-coding Quantization for Accurate Compression of Large Language Models

4 June 2025
Seungcheol Park, Jeongin Bae, Beomseok Kwon, Minjun Kim, Byeongwook Kim, Se Jung Kwon, U Kang, Dongsoo Lee
Main: 9 pages · 8 figures · Bibliography: 3 pages · 16 tables · Appendix: 9 pages
Abstract

How can we quantize large language models while preserving accuracy? Quantization is essential for deploying large language models (LLMs) efficiently. Binary-coding quantization (BCQ) and uniform quantization (UQ) are promising quantization schemes that offer strong expressiveness and strong optimizability, respectively; however, neither scheme leverages both advantages. In this paper, we propose UniQuanF (Unified Quantization with Flexible Mapping), an accurate quantization method for LLMs. UniQuanF harnesses both strong expressiveness and optimizability by unifying the flexible mapping technique of UQ with the non-uniform quantization levels of BCQ. We propose unified initialization and local and periodic mapping techniques to optimize UniQuanF's parameters precisely. After optimization, our unification theorem removes the computational and memory overhead introduced by the unification, allowing us to exploit UniQuanF's superior accuracy without extra deployment costs. Experimental results demonstrate that UniQuanF outperforms existing UQ and BCQ methods, achieving up to 4.60% higher accuracy on the GSM8K benchmark.
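To make the contrast in the abstract concrete, the sketch below quantizes a toy weight vector with two textbook baselines: round-to-nearest uniform quantization (UQ) and a greedy residual form of binary-coding quantization (BCQ, w ≈ Σ_k α_k b_k with b_k ∈ {−1, +1}). This is only an illustration of the two baseline schemes under our own assumptions (function names, per-tensor scaling, greedy α fitting); it is not the paper's UniQuanF method.

# Minimal sketch of the two baseline schemes discussed in the abstract.
# NOT the paper's UniQuanF method; a toy illustration under assumed formulations.
import numpy as np

def uniform_quantize(w, bits=4):
    """Round-to-nearest uniform quantization with a single per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax                  # step size
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                                # dequantized weights

def bcq_quantize(w, bits=4):
    """Greedy binary-coding quantization: w is approximated by a sum of
    scaled binary codes, fitting one (alpha, b) pair per bit on the residual."""
    residual = w.copy()
    approx = np.zeros_like(w)
    for _ in range(bits):
        b = np.sign(residual)
        b[b == 0] = 1.0
        alpha = np.abs(residual).mean()             # least-squares scale for fixed b
        approx += alpha * b
        residual -= alpha * b
    return approx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=1024).astype(np.float32)
    print("UQ  reconstruction MSE:", np.mean((w - uniform_quantize(w)) ** 2))
    print("BCQ reconstruction MSE:", np.mean((w - bcq_quantize(w)) ** 2))

In this framing, UQ's single scale makes its parameters easy to optimize, while BCQ's per-bit scales give non-uniform quantization levels and hence more expressiveness; the paper's contribution is a scheme that unifies the two.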

@article{park2025_2506.03781,
  title={Unifying Uniform and Binary-coding Quantization for Accurate Compression of Large Language Models},
  author={Seungcheol Park and Jeongin Bae and Beomseok Kwon and Minjun Kim and Byeongwook Kim and Se Jung Kwon and U Kang and Dongsoo Lee},
  journal={arXiv preprint arXiv:2506.03781},
  year={2025}
}