LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, G. Hua
arXiv:1807.10029 | MQ | 26 July 2018

Papers citing "LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks"

50 of 154 citing papers are listed below (title; authors; topic tags where available; date).
Radio: Rate-Distortion Optimization for Large Language Model Compression
Sean I. Young | MQ | 05 May 2025

Cauchy-Schwarz Regularizers
Sueda Taner, Ziyi Wang, Christoph Studer | 03 Mar 2025

GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning
Sifan Zhou, Shuo Wang, Zhihang Yuan, Mingjia Shi, Yuzhang Shang, Dawei Yang | ALM, MQ | 18 Feb 2025

Fast Matrix Multiplications for Lookup Table-Quantized LLMs
Han Guo, William Brandon, Radostin Cholakov, Jonathan Ragan-Kelley, Eric P. Xing, Yoon Kim | MQ | 20 Jan 2025

Histogram-Equalized Quantization for logic-gated Residual Neural Networks
Van Thien Nguyen, William Guicquero, Gilles Sicard | MQ | 10 Jan 2025

Data Generation for Hardware-Friendly Post-Training Quantization
Lior Dikstein, Ariel Lapid, Arnon Netzer, H. Habi | MQ | 29 Oct 2024

Foundations of Large Language Model Compression -- Part 1: Weight Quantization
Sean I. Young | MQ | 03 Sep 2024

BOLD: Boolean Logic Deep Learning
Van Minh Nguyen, Cristian Ocampo, Aymen Askri, Louis Leconte, Ba-Hien Tran | AI4CE | 25 May 2024

Combining Relevance and Magnitude for Resource-Aware DNN Pruning
C. Chiasserini, F. Malandrino, Nuria Molner, Zhiqiang Zhao | 21 May 2024

AdaQAT: Adaptive Bit-Width Quantization-Aware Training
Cédric Gernigon, Silviu-Ioan Filip, Olivier Sentieys, Clément Coggiola, Mickael Bruno | 22 Apr 2024

CBQ: Cross-Block Quantization for Large Language Models
Xin Ding, Xiaoyu Liu, Zhijun Tu, Yun-feng Zhang, Wei Li, ..., Hanting Chen, Yehui Tang, Zhiwei Xiong, Baoqun Yin, Yunhe Wang | MQ | 13 Dec 2023

PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off
Sachit Kuhar, Yash Jain, Alexey Tumanov | MQ | 04 Dec 2023

Quantization-aware Neural Architectural Search for Intrusion Detection
R. Acharya, Laurens Le Jeune, N. Mentens, F. Ganji, Domenic Forte | 07 Nov 2023

FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search
Jordan Dotzel, Gang Wu, Andrew Li, M. Umar, Yun Ni, ..., Liqun Cheng, Martin G. Dixon, N. Jouppi, Quoc V. Le, Sheng Li | MQ | 07 Aug 2023

Quantized Feature Distillation for Network Quantization
Kevin Zhu, Yin He, Jianxin Wu | MQ | 20 Jul 2023

AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks
Cheng Gong, Ye Lu, Surong Dai, Deng Qian, Chenkun Du, Tao Li | MQ | 07 Apr 2023

Optimizing data-flow in Binary Neural Networks
Lorenzo Vorabbi, Davide Maltoni, Stefano Santi | MQ | 03 Apr 2023

Binarizing Sparse Convolutional Networks for Efficient Point Cloud Analysis
Xiuwei Xu, Ziwei Wang, Jie Zhou, Jiwen Lu | 3DPC, MQ | 27 Mar 2023

MetaGrad: Adaptive Gradient Quantization with Hypernetworks
Kaixin Xu, Alina Hui Xiu Lee, Ziyuan Zhao, Zhe Wang, Min-man Wu, Weisi Lin | MQ | 04 Mar 2023

Oscillation-free Quantization for Low-bit Vision Transformers
Shi Liu, Zechun Liu, Kwang-Ting Cheng | MQ | 04 Feb 2023

An Optical XNOR-Bitcount Based Accelerator for Efficient Inference of Binary Neural Networks
Sairam Sri Vatsavai, Venkata Sai Praneeth Karempudi, Ishan G. Thakkar | MQ | 03 Feb 2023

Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
Deepika Bablani, J. McKinstry, S. K. Esser, R. Appuswamy, D. Modha | MQ | 30 Jan 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks
Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan | 20 Jan 2023

RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão | MQ | 15 Jan 2023

Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu, X. Chen, Chen-li Ma, Xue Liu | MQ | 24 Dec 2022

CSMPQ: Class Separability Based Mixed-Precision Quantization
Ming-Yu Wang, Taisong Jin, Miaohui Zhang, Zhengtao Yu | MQ | 20 Dec 2022

Redistribution of Weights and Activations for AdderNet Quantization
Ying Nie, Kai Han, Haikang Diao, Chuanjian Liu, Enhua Wu, Yunhe Wang | MQ | 20 Dec 2022

PD-Quant: Post-Training Quantization based on Prediction Difference Metric
Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu | MQ | 14 Dec 2022

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang | MQ | 10 Dec 2022

QEBVerif: Quantization Error Bound Verification of Neural Networks
Yedi Zhang, Fu Song, Jun Sun | MQ | 06 Dec 2022

CSQ: Growing Mixed-Precision Quantization Scheme with Bi-level Continuous Sparsification
Lirui Xiao, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang | MQ | 06 Dec 2022

Boosted Dynamic Neural Networks
Haichao Yu, Haoxiang Li, G. Hua, Gao Huang, Humphrey Shi | 30 Nov 2022

NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers
Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang | MQ | 29 Nov 2022

Signed Binary Weight Networks
Sachit Kuhar, Alexey Tumanov, Judy Hoffman | MQ | 25 Nov 2022

Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Nianhui Guo, Joseph Bethge, Christoph Meinel, Haojin Yang | MQ | 23 Nov 2022

BiViT: Extremely Compressed Binary Vision Transformer
Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang | ViT, MQ | 14 Nov 2022

Efficiently Scaling Transformer Inference
Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Anselm Levskaya, Jonathan Heek, Kefan Xiao, Shivani Agrawal, J. Dean | 09 Nov 2022

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines | 07 Nov 2022

Weight Fixing Networks
Christopher Subia-Waud, S. Dasmahapatra | MQ | 24 Oct 2022

Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization
Alireza Bordbar, M. Kahaei | MQ | 24 Oct 2022

Convolutional Neural Networks Quantization with Attention
Binyi Wu, Bernd Waschneck, Christian Mayr | MQ | 30 Sep 2022

Efficient Quantized Sparse Matrix Operations on Tensor Cores
Shigang Li, Kazuki Osawa, Torsten Hoefler | 14 Sep 2022

Analysis of Quantization on MLP-based Vision Models
Lingran Zhao, Zhen Dong, Kurt Keutzer | MQ | 14 Sep 2022

PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu | ViT, MQ | 13 Sep 2022

FP8 Formats for Deep Learning
Paulius Micikevicius, Dusan Stosic, N. Burgess, Marius Cornea, Pradeep Dubey, ..., Naveen Mellempudi, S. Oberman, M. Shoeybi, Michael Siu, Hao Wu | BDL, VLM, MQ | 12 Sep 2022

ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
Cong Guo, Chen Zhang, Jingwen Leng, Zihan Liu, Fan Yang, Yun-Bo Liu, Minyi Guo, Yuhao Zhu | MQ | 30 Aug 2022

Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi | MQ | 11 Aug 2022

I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li, Qingyi Gu | MQ | 04 Jul 2022

QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization
Fadi Boutros, Naser Damer, Arjan Kuijper | CVBM, MQ | 21 Jun 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava | 06 Apr 2022