Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Asit K. Mishra, Debbie Marr
arXiv:1711.05852 · 15 November 2017
Papers citing "Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy" (50 of 72 papers shown)
Breaking the Memory Wall for Heterogeneous Federated Learning via Model Splitting
Chunlin Tian, Li Li, Kahou Tam, Yebo Wu, Chengzhong Xu · FedML · 12 Oct 2024

Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, U. Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger · 19 Sep 2024

Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki · 16 Jul 2024

CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models
Xuechen Liang, Meiling Tao, Yinghui Xia, Yiting Xie, Jun Wang, JingSong Yang · LLMAG · 02 Apr 2024

Quantized Feature Distillation for Network Quantization
Kevin Zhu, Yin He, Jianxin Wu · MQ · 20 Jul 2023

Multimodal Distillation for Egocentric Action Recognition
Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars · EgoV · 14 Jul 2023

Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models
James O'Neill, Sourav Dutta · VLM, MQ · 12 Jul 2023

MobileNMT: Enabling Translation in 15MB and 30ms
Ye Lin, Xiaohui Wang, Zhexi Zhang, Mingxuan Wang, Tong Xiao, Jingbo Zhu · MQ · 07 Jun 2023

RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão · MQ · 15 Jan 2023

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang · MQ · 10 Dec 2022

QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller · MQ · 05 Dec 2022

BiViT: Extremely Compressed Binary Vision Transformer
Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang · ViT, MQ · 14 Nov 2022

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines · 07 Nov 2022

Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization
Alireza Bordbar, M. Kahaei · MQ · 24 Oct 2022

Designing and Training of Lightweight Neural Networks on Edge Devices using Early Halting in Knowledge Distillation
Rahul Mishra, Hari Prabhat Gupta · 30 Sep 2022

Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi · MQ · 11 Aug 2022

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu · 02 Aug 2022

BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation
Geondo Park, Jaehong Yoon, H. Zhang, Xingge Zhang, Sung Ju Hwang, Yonina C. Eldar · MQ · 04 Jul 2022

Knowledge Distillation via the Target-aware Transformer
Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, G. Wang · ViT · 22 May 2022

PCA-Based Knowledge Distillation Towards Lightweight and Content-Style Balanced Photorealistic Style Transfer Models
Tai-Yin Chiu, Danna Gurari · 25 Mar 2022

FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad · MQ · 22 Mar 2022

Neural Network Quantization for Efficient Inference: A Survey
Olivia Weng · MQ · 08 Dec 2021

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou · MQ · 01 Nov 2021

Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation
Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen · 19 Oct 2021

Secure Your Ride: Real-time Matching Success Rate Prediction for Passenger-Driver Pairs
Yuandong Wang, Hongzhi Yin, Lian Wu, Tong Chen, Chunyang Liu · 14 Sep 2021

Auto-Split: A General Framework of Collaborative Edge-Cloud AI
Amin Banitalebi-Dehkordi, Naveen Vedula, J. Pei, Fei Xia, Lanjun Wang, Yong Zhang · 30 Aug 2021

Knowledge Distillation via Instance-level Sequence Learning
Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li · 21 Jun 2021

How Low Can We Go: Trading Memory for Error in Low-Precision Training
Chengrun Yang, Ziyang Wu, Jerry Chee, Christopher De Sa, Madeleine Udell · 17 Jun 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov · VLM · 09 Jun 2021

Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices
Mustafa Ayazoglu · 21 May 2021

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen, Zhenyu (Allen) Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang · MQ · 16 Apr 2021

Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
Phuoc Pham, J. Abraham, Jaeyong Chung · MQ · 01 Apr 2021

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany · MQ · 08 Feb 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu · SSL · 12 Jan 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer · MQ · 05 Jan 2021

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong · 05 Dec 2020

BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures
Tianchen Zhao, Xuefei Ning, Xiangsheng Shi, Songyi Yang, Shuang Liang, Peng Lei, Jianfei Chen, Huazhong Yang, Yu Wang · MQ · 21 Nov 2020

Ensemble Knowledge Distillation for CTR Prediction
Jieming Zhu, Jinyang Liu, Weiqi Li, Jincai Lai, Xiuqiang He, Liang Chen, Zibin Zheng · 08 Nov 2020

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung · MQ · 30 Sep 2020

Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation
Sajjad Abbasi, M. Hajabdollahi, P. Khadivi, N. Karimi, Roshank Roshandel, S. Shirani, S. Samavi · 01 Sep 2020

NASB: Neural Architecture Search for Binary Convolutional Neural Networks
Baozhou Zhu, Zaid Al-Ars, P. Hofstee · MQ · 08 Aug 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 09 Jun 2020

An Overview of Neural Network Compression
James O'Neill · AI4CE · 05 Jun 2020

Binary Neural Networks: A Survey
Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe · MQ · 31 Mar 2020

Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA
M. Kroes, L. Petrica, S. Cotofana, Michaela Blott · 24 Mar 2020

Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices
Byung Hoon Ahn, Jinwon Lee, J. Lin, Hsin-Pai Cheng, Jilei Hou, H. Esmaeilzadeh · 04 Mar 2020

Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation
Chuteng Zhou, Prad Kadambi, Matthew Mattina, P. Whatmough · 14 Jan 2020

Resource-Efficient Neural Networks for Embedded Systems
Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani · 07 Jan 2020

Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation
Sajjad Abbasi, M. Hajabdollahi, N. Karimi, S. Samavi · 31 Dec 2019

Towards Unified INT8 Training for Convolutional Neural Network
Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan · MQ · 29 Dec 2019