arXiv 2410.06846
Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity
9 October 2024
Mutian He
Philip N. Garner
Papers citing "Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity" (6 of 56 shown):
- "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (11 Oct 2018)
- "TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation" by François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, Yannick Estève (12 May 2018)
- "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding" by Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman (20 Apr 2018)
- "Explicit Inductive Bias for Transfer Learning with Convolutional Networks" by Xuhong Li, Yves Grandvalet, Franck Davoine (05 Feb 2018)
- "SQuAD: 100,000+ Questions for Machine Comprehension of Text" by Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang (16 Jun 2016)
- "Distilling the Knowledge in a Neural Network" by Geoffrey E. Hinton, Oriol Vinyals, Jeff Dean (09 Mar 2015)