Transformer to CNN: Label-scarce distillation for efficient text classification
Yew Ken Chia, Sam Witteveen, Martin Andrews
8 September 2019 · arXiv:1909.03508
Papers citing "Transformer to CNN: Label-scarce distillation for efficient text classification" (5 of 5 papers shown)
Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Qinyuan Ye, Madian Khabsa, M. Lewis, Sinong Wang, Xiang Ren, Aaron Jaech · 16 Oct 2021
Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network
Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari · 22 Sep 2021
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos · 08 May 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang · 18 Mar 2020
Convolutional Neural Networks for Sentence Classification
Yoon Kim · 25 Aug 2014