Deep Model Compression: Distilling Knowledge from Noisy Teachers

30 October 2016
Bharat Bhusan Sau, V. Balasubramanian

Papers citing "Deep Model Compression: Distilling Knowledge from Noisy Teachers"

25 / 75 papers shown

Extreme Low Resolution Activity Recognition with Confident Spatial-Temporal Attention Transfer
Yucai Bai, Qinglong Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen
09 Sep 2019

Patient Knowledge Distillation for BERT Model Compression
S. Sun, Yu Cheng, Zhe Gan, Jingjing Liu
25 Aug 2019

Highlight Every Step: Knowledge Distillation via Collaborative Teaching
Haoran Zhao, Xin Sun, Junyu Dong, Changrui Chen, Zihe Dong
23 Jul 2019

Light Multi-segment Activation for Model Compression
Zhenhui Xu, Guolin Ke, Jia Zhang, Jiang Bian, Tie-Yan Liu
16 Jul 2019

Interpretable Few-Shot Learning via Linear Distillation
Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov
FedML, FAtt
13 Jun 2019

OpenEI: An Open Framework for Edge Intelligence
Xingzhou Zhang, Yifan Wang, Sidi Lu, Liangkai Liu, Lanyu Xu, Weisong Shi
05 Jun 2019

The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning
Bonggun Shin, Hao Yang, Jinho Choi
31 May 2019

Triplet Distillation for Deep Face Recognition
Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu
CVBM
11 May 2019

Relational Knowledge Distillation
Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
10 Apr 2019

Correlation Congruence for Knowledge Distillation
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
03 Apr 2019

All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification
Weijie Chen, Di Xie, Yuan Zhang, Shiliang Pu
13 Mar 2019

Compressing complex convolutional neural network based on an improved deep compression algorithm
Jiasong Wu, Hongshan Ren, Youyong Kong, Chunfeng Yang, L. Senhadji, H. Shu
06 Mar 2019

Improved Knowledge Distillation via Teacher Assistant
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh
09 Feb 2019

Reliable Identification of Redundant Kernels for Convolutional Neural Network Compression
Wei Wang, Liqiang Zhu
CVBM
10 Dec 2018

Wireless Network Intelligence at the Edge
Jihong Park, S. Samarakoon, M. Bennis, Mérouane Debbah
07 Dec 2018

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference
Qing Qin, Jie Ren, Jia-Le Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
21 Oct 2018

Shift-based Primitives for Efficient Convolutional Neural Networks
Huasong Zhong, Xianggen Liu, Yihui He, Yuchun Ma
22 Sep 2018

RDPD: Rich Data Helps Poor Data via Imitation
Linda Qiao, Cao Xiao, Trong Nghia Hoang, Tengfei Ma, Hongyan Li, Jimeng Sun
06 Sep 2018

Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
S. Jung, Changyong Son, Seohyung Lee, JinWoo Son, Youngjun Kwak, Jae-Joon Han, Sung Ju Hwang, Changkyu Choi
MQ
17 Aug 2018

Efficient Fusion of Sparse and Complementary Convolutions
Chun-Fu Chen, Quanfu Fan, Marco Pistoia, G. Lee
07 Aug 2018

Knowledge Transfer with Jacobian Matching
Suraj Srinivas, François Fleuret
01 Mar 2018

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, V. Balasubramanian
FAtt
30 Oct 2017

Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
Chong-Jun Wang, Xipeng Lan, Yang Zhang
CVBM
09 Sep 2017

Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks
Zheng Xu, Yen-Chang Hsu, Jiawei Huang
GAN
02 Sep 2017

Sobolev Training for Neural Networks
Wojciech M. Czarnecki, Simon Osindero, Max Jaderberg, G. Swirszcz, Razvan Pascanu
15 Jun 2017