Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Asit K. Mishra, Debbie Marr
arXiv:1711.05852, 15 November 2017
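For context, the technique the paper's title refers to, distilling knowledge from a full-precision teacher into a low-precision student network, is commonly expressed as a weighted sum of a soft (teacher-matching) loss and a hard (ground-truth) loss. The sketch below is a generic Hinton-style distillation loss, not the paper's exact Apprentice scheme; the function names, temperature `T`, and weight `alpha` are illustrative placeholders.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.9):
    """Combined loss for a low-precision student mimicking a full-precision teacher.

    student_logits: outputs of the quantized (low-precision) network.
    teacher_logits: outputs of the full-precision network.
    """
    # Soft targets: KL(teacher || student) at temperature T.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    # Hard targets: cross-entropy with the ground-truth label at T = 1.
    hard = -math.log(softmax(student_logits)[true_label])
    # T*T rescales the soft-term gradients so both terms stay comparable.
    return alpha * (T * T) * soft + (1 - alpha) * hard
```

When the student's logits match the teacher's exactly, the soft term vanishes and only the (down-weighted) hard cross-entropy remains; as the low-precision student drifts from the teacher, the KL term grows and pulls it back toward the teacher's output distribution.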
Papers citing "Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy" (22 of 72 shown):
- Towards Efficient Training for Neural Network Quantization. Qing Jin, Linjie Yang, Zhenyu A. Liao. 21 Dec 2019 [MQ]
- Adaptive Loss-aware Quantization for Multi-bit Networks. Zhongnan Qu, Zimu Zhou, Yun Cheng, Lothar Thiele. 18 Dec 2019 [MQ]
- QKD: Quantization-aware Knowledge Distillation. Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak. 28 Nov 2019 [MQ]
- Iteratively Training Look-Up Tables for Network Quantization. Fabien Cardinaux, Stefan Uhlich, K. Yoshiyama, Javier Alonso García, Lukas Mauch, Stephen Tiedemann, Thomas Kemp, Akira Nakamura. 12 Nov 2019 [MQ]
- On the Efficacy of Knowledge Distillation. Ligang He, Rui Mao. 03 Oct 2019
- Structured Binary Neural Networks for Image Recognition. Bohan Zhuang, Chunhua Shen, Mingkui Tan, Peng Chen, Lingqiao Liu, Ian Reid. 22 Sep 2019 [MQ]
- Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks. Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tian-Hao Li, Peng Hu, Jiazhen Lin, F. Yu, Junjie Yan. 14 Aug 2019 [MQ]
- Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations. Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen. 10 Aug 2019 [MQ]
- GDRQ: Group-based Distribution Reshaping for Quantization. Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi. 05 Aug 2019 [MQ]
- And the Bit Goes Down: Revisiting the Quantization of Neural Networks. Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou. 12 Jul 2019 [MQ]
- Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks. Ahmed T. Elthakeb, Prannoy Pilligundla, Alex Cloninger, H. Esmaeilzadeh. 14 Jun 2019 [MQ]
- Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization. K. Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, K. Cheng, Roeland Nusselder. 05 Jun 2019 [MQ]
- Training Quantized Neural Networks with a Full-precision Auxiliary Module. Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid. 27 Mar 2019 [MQ]
- Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation. Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid. 22 Nov 2018 [MQ]
- Relaxed Quantization for Discretized Neural Networks. Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling. 03 Oct 2018 [MQ]
- Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation. Zhezhi He, Deliang Fan. 02 Oct 2018 [MQ]
- Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN). Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan. 17 Jul 2018 [MQ]
- Quantizing deep convolutional networks for efficient inference: A whitepaper. Raghuraman Krishnamoorthi. 21 Jun 2018 [MQ]
- Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking. Haichuan Yang, Yuhao Zhu, Ji Liu. 12 Jun 2018 [CVBM]
- UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks. Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, A. Bronstein, A. Mendelson. 29 Apr 2018 [MQ]
- Value-aware Quantization for Training and Inference of Neural Networks. Eunhyeok Park, S. Yoo, Peter Vajda. 20 Apr 2018 [MQ]
- Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights. Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen. 10 Feb 2017 [MQ]