Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference

As they become more capable, large language models (LLMs) have continued to grow rapidly in size. This has exacerbated the difficulty of running state-of-the-art LLMs on small edge devices. Standard approaches address this problem with lossy compression techniques such as quantization or pruning. However, such techniques have been shown to change model behavior in unpredictable ways. We propose Huff-LLM, an \emph{end-to-end, lossless} model compression method that lets users store LLM weights in compressed format \emph{everywhere}: in the cloud, on disk, in main memory, and even in on-chip memory/buffers. This not only allows larger models to fit in main memory, but also reduces the bandwidth required to load weights on chip and makes more efficient use of on-chip weight buffers. In addition to the memory savings achieved via compression, we also demonstrate latency and energy-efficiency improvements when performing inference with the compressed model.
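The abstract does not spell out the compression scheme, but the name Huff-LLM suggests Huffman coding of the weight representation. The sketch below is an illustrative assumption, not the authors' actual design: it applies byte-level Huffman coding to a stand-in FP16 weight tensor and verifies an exact round trip, which is what makes the compression lossless and the model's behavior bit-identical. The helper names (build_huffman_codes, encode, decode) are hypothetical.

\begin{verbatim}
# Minimal sketch: byte-wise Huffman coding of a weight tensor (assumed scheme).
import heapq
from collections import Counter

import numpy as np


def build_huffman_codes(data: bytes) -> dict[int, str]:
    """Build a Huffman code (byte symbol -> bitstring) from byte frequencies."""
    freq = Counter(data)
    # Heap entries: (frequency, unique tie-breaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]


def encode(data: bytes, codes: dict[int, str]) -> str:
    return "".join(codes[b] for b in data)


def decode(bits: str, codes: dict[int, str]) -> bytes:
    inverse = {c: s for s, c in codes.items()}
    out, buf = bytearray(), ""
    for bit in bits:
        buf += bit
        if buf in inverse:          # prefix-free codes: first match is the symbol
            out.append(inverse[buf])
            buf = ""
    return bytes(out)


if __name__ == "__main__":
    # Stand-in for one layer's FP16 weights; real LLM weights are far larger.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal(4096, dtype=np.float32).astype(np.float16)
    raw = weights.tobytes()

    codes = build_huffman_codes(raw)
    bits = encode(raw, codes)
    assert decode(bits, codes) == raw  # exact round trip: compression is lossless
    print(f"raw: {8 * len(raw)} bits, huffman: {len(bits)} bits "
          f"({len(bits) / (8 * len(raw)):.2%} of original)")
\end{verbatim}

In an end-to-end deployment of this kind, the decoder would sit next to the on-chip weight buffers so that weights stay compressed across cloud, disk, main memory, and on-chip storage, and are expanded only immediately before the compute units consume them; that placement is an inference from the abstract, not a detail it states.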
@article{yubeaton2025_2502.00922,
  title   = {Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference},
  author  = {Patrick Yubeaton and Tareq Mahmoud and Shehab Naga and Pooria Taheri and Tianhua Xia and Arun George and Yasmein Khalil and Sai Qian Zhang and Siddharth Joshi and Chinmay Hegde and Siddharth Garg},
  journal = {arXiv preprint arXiv:2502.00922},
  year    = {2025}
}