ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization

4 February 2025
Zechun Liu, Changsheng Zhao, Hanxian Huang, Sijia Chen, Jing Zhang, Jiawei Zhao, Scott Roy, Lisa Jin, Yunyang Xiong, Yangyang Shi, Lin Xiao, Yuandong Tian, Bilge Soran, Raghuraman Krishnamoorthi, Tijmen Blankevoort, Vikas Chandra
    MQ
arXiv:2502.02631

Papers citing "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization"

1 paper shown.

Resource-Efficient Language Models: Quantization for Fast and Accessible Inference
Tollef Emil Jørgensen
MQ
13 May 2025