Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization
arXiv: 1904.04163 · 8 April 2019
Yangyang Shi, M. Hwang, X. Lei, Haoyu Sheng
Papers citing "Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization" (5 of 5 shown):
1. Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
   Kuluhan Binici, N. Pham, T. Mitra, K. Leman
   11 Aug 2021 · 40 citations

2. Spectral Pruning for Recurrent Neural Networks
   Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon
   23 May 2021 · 4 citations

3. A Variational Information Bottleneck Based Method to Compress Sequential Networks for Human Action Recognition
   Ayush Srivastava, Oshin Dutta, A. Prathosh, Sumeet Agarwal, Jigyasa Gupta
   03 Oct 2020 · 8 citations

4. Knowledge Distillation: A Survey
   Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
   09 Jun 2020 · 2,843 citations

5. Neural Architecture Search with Reinforcement Learning
   Barret Zoph, Quoc V. Le
   05 Nov 2016 · 5,327 citations