ResearchTrend.AI

Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition (arXiv:2106.13686)

25 June 2021
Jianrong Wang, Zi-yue Tang, Xuewei Li, Mei Yu, Qiang Fang, Li Liu

Papers citing "Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition"

5 papers shown
Re-synchronization using the Hand Preceding Model for Multi-modal Fusion in Automatic Continuous Cued Speech Recognition
Li Liu, G. Feng, D. Beautemps, Xiao-Ping Zhang
03 Jan 2020
Common Voice: A Massively-Multilingual Speech Corpus
Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, M. Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, Gregor Weber
13 Dec 2019
Graph Distillation for Action Detection with Privileged Modalities
Zelun Luo, Jun-Ting Hsieh, Lu Jiang, Juan Carlos Niebles, Li Fei-Fei
30 Nov 2017
Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
Alex Kendall, Y. Gal, R. Cipolla
19 May 2017
Speech Recognition with Deep Recurrent Neural Networks
Alex Graves, Abdel-rahman Mohamed, Geoffrey E. Hinton
22 Mar 2013