ResearchTrend.AI

Distilling a Pretrained Language Model to a Multilingual ASR Model
25 June 2022
Kwanghee Choi, Hyung-Min Park
Topic: VLM

Papers citing "Distilling a Pretrained Language Model to a Multilingual ASR Model"

4 papers shown
CrossSpeech++: Cross-lingual Speech Synthesis with Decoupled Language and Speaker Generation
Ji-Hoon Kim, Hong-Sun Yang, Yoon-Cheol Ju, Il-Hwan Kim, Byeong-Yeol Kim, Joon Son Chung
Topic: BDL
31 Dec 2024
Hierarchical Cross-Modality Knowledge Transfer with Sinkhorn Attention for CTC-based ASR
Ambar Pal, Jeremias Sulam, Yu Tsao, René Vidal
28 Sep 2023
Cross-modal Alignment with Optimal Transport for CTC-based ASR
Xugang Lu, Peng Shen, Yu Tsao, Hisashi Kawai
24 Sep 2023
Comparison of L2 Korean pronunciation error patterns from five L1 backgrounds by using automatic phonetic transcription
E. Yeo, Hyungshin Ryu, Jooyoung Lee, Sunhee Kim, Minhwa Chung
19 Jun 2023