
FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration

27 May 2025
Daehyeon Baek
Jieun Choi
Jimyoung Son
Kyungmin Bin
Seungbeom Choi
Kihyo Moon
Minsung Jang
Hyojung Lee
Main: 9 pages · Appendix: 7 pages · Bibliography: 2 pages · 17 figures · 4 tables
Abstract

As large language models become increasingly prevalent, memory bandwidth constraints significantly limit inference throughput, motivating post-training quantization (PTQ). In this paper, we propose FireQ, a co-designed PTQ framework and INT4-FP8 matrix multiplication kernel that accelerates LLM inference across all linear layers. Specifically, FireQ quantizes linear-layer weights and key/value caches to INT4, and activations and queries to FP8, significantly enhancing throughput. Additionally, we introduce a three-stage pipelining scheme for the prefill phase that modifies the FlashAttention-3 kernel, effectively reducing time-to-first-token. To minimize accuracy loss from quantization, we develop novel outlier-smoothing techniques tailored separately to linear and attention layers. In linear layers, we use per-tensor scaling to prevent underflow caused by FP8 quantization of the INT4 scaling factors, and channel-wise scaling to compensate for the coarse granularity of INT4. In attention layers, we address the quantization challenges posed by rotary positional embeddings (RoPE) by combining pre-RoPE and post-RoPE scaling strategies. FireQ significantly outperforms state-of-the-art methods, achieving 1.68x faster inference in feed-forward network layers on Llama2-7B and 1.26x faster prefill-phase performance on Llama3-8B compared to QServe, with negligible accuracy loss.
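
As a rough illustration of the quantization scheme the abstract describes (INT4 weights with channel-wise scales, FP8-style activations with a per-tensor scale), the NumPy sketch below emulates such a mixed-precision linear layer. All function names and scaling choices here are assumptions for illustration, not the paper's kernel; FP8 is emulated in float32 since NumPy has no FP8 dtype, and the real FireQ kernel fuses these steps in a GPU INT4-FP8 GEMM.

import numpy as np

def quantize_weight_int4(w):
    """Symmetric INT4 quantization with a per-output-channel scale (range [-8, 7])."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12  # channel-wise scale
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def quantize_activation_fp8(x):
    """Per-tensor scaling into an FP8-like range (E4M3 max ~448), emulated in float32."""
    scale = np.abs(x).max() / 448.0 + 1e-12                     # per-tensor scale
    return (x / scale).astype(np.float32), scale

def int4_fp8_linear(x, w):
    """Emulated low-precision linear layer: y ~= x @ w.T."""
    qw, w_scale = quantize_weight_int4(w)
    qx, x_scale = quantize_activation_fp8(x)
    # Accumulate in higher precision, then undo both scales.
    y = qx @ qw.T.astype(np.float32)
    return y * x_scale * w_scale.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 64)).astype(np.float32)
    w = rng.standard_normal((128, 64)).astype(np.float32)
    ref = x @ w.T
    approx = int4_fp8_linear(x, w)
    print("relative error:", np.linalg.norm(ref - approx) / np.linalg.norm(ref))

The per-tensor activation scale and per-channel weight scales mirror the two smoothing granularities mentioned for linear layers; the pre-/post-RoPE scaling applied in attention layers is not modeled in this sketch.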

@article{baek2025_2505.20839,
  title={FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration},
  author={Daehyeon Baek and Jieun Choi and Jimyoung Son and Kyungmin Bin and Seungbeom Choi and Kihyo Moon and Minsung Jang and Hyojung Lee},
  journal={arXiv preprint arXiv:2505.20839},
  year={2025}
}