Self-Supervised Quantization-Aware Knowledge Distillation

17 March 2024
Kaiqi Zhao, Ming Zhao
MQ
ArXiv · PDF · HTML

Papers citing "Self-Supervised Quantization-Aware Knowledge Distillation"

23 / 23 papers shown

Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham, Tuan Hoang, Thanh-Toan Do
FedML, MQ
55 · 15 · 0
27 Oct 2022

MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
Yuhang Li, Mingzhu Shen, Jian Ma, Yan Ren, Mingxin Zhao, Qi Zhang, Ruihao Gong, F. Yu, Junjie Yan
MQ
52 · 50 · 0
05 Nov 2021

Network Quantization with Element-wise Gradient Scaling
Junghyup Lee, Dohyung Kim, Bumsub Ham
MQ
39 · 120 · 0
02 Apr 2021

BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction
Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, F. Yu, Wei Wang, Shi Gu
MQ
109 · 435 · 0
10 Feb 2021

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung
MQ
51 · 30 · 0
30 Sep 2020

QKD: Quantization-aware Knowledge Distillation
Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak
MQ
80 · 64 · 0
28 Nov 2019

Loss Aware Post-training Quantization
Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, A. Bronstein, A. Mendelson
MQ
69 · 166 · 0
17 Nov 2019

Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks
Yuhang Li, Xin Dong, Wei Wang
MQ
62 · 258 · 0
28 Sep 2019

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tian-Hao Li, Peng Hu, Jiazhen Lin, F. Yu, Junjie Yan
MQ
60 · 457 · 0
14 Aug 2019

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
113 · 976 · 0
23 Jul 2019

Relational Knowledge Distillation
Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
63 · 1,410 · 0
10 Apr 2019

Correlation Congruence for Knowledge Distillation
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
86 · 511 · 0
03 Apr 2019

Learned Step Size Quantization
S. K. Esser, J. McKinstry, Deepika Bablani, R. Appuswamy, D. Modha
MQ
73 · 802 · 0
21 Feb 2019

LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, G. Hua
MQ
59 · 703 · 0
26 Jul 2018

Quantizing deep convolutional networks for efficient inference: A whitepaper
Raghuraman Krishnamoorthi
MQ
129 · 1,015 · 0
21 Jun 2018

PACT: Parameterized Clipping Activation for Quantized Neural Networks
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, P. Chuang, Vijayalakshmi Srinivasan, K. Gopalakrishnan
MQ
58 · 948 · 0
16 May 2018

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen
173 · 19,262 · 0
13 Jan 2018

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
AI4TS
134 · 6,865 · 0
04 Jul 2017

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
116 · 2,578 · 0
12 Dec 2016

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
MQ
161 · 4,353 · 0
16 Mar 2016

Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
350 · 10,180 · 0
16 Mar 2016

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer
139 · 7,477 · 0
24 Feb 2016

Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture
David Eigen, Rob Fergus
VLM, MDE
205 · 2,679 · 0
18 Nov 2014