Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition
arXiv: 2106.13686
25 June 2021
Jianrong Wang, Zi-yue Tang, Xuewei Li, Mei Yu, Qiang Fang, Li Liu

Papers citing "Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition" (5 of 5 papers shown)

Re-synchronization using the Hand Preceding Model for Multi-modal Fusion in Automatic Continuous Cued Speech Recognition
Li Liu, G. Feng, D. Beautemps, Xiao-Ping Zhang
03 Jan 2020

Common Voice: A Massively-Multilingual Speech Corpus
Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, M. Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, Gregor Weber
13 Dec 2019

Graph Distillation for Action Detection with Privileged Modalities
Zelun Luo, Jun-Ting Hsieh, Lu Jiang, Juan Carlos Niebles, Li Fei-Fei
30 Nov 2017

Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
Alex Kendall, Y. Gal, R. Cipolla
19 May 2017

Speech Recognition with Deep Recurrent Neural Networks
Alex Graves, Abdel-rahman Mohamed, Geoffrey E. Hinton
22 Mar 2013