| Title | Authors |
|---|---|
| Training and inference of large language models using 8-bit floating point | Sergio P. Perez, Yan Zhang, James Briggs, Charlie Blake, Prashanth Krishnamurthy, Paul Balanca, Carlo Luschi, Stephen Barlow, Andrew William Fitzgibbon |
| Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models | Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee |
| SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression | Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, Dan Alistarh |