MoLE: Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition
Yoohwan Kwon, Soo-Whan Chung
27 February 2023 · arXiv:2302.13750
Topic: MoE
Papers citing "MoLE: Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition" (5 papers)
Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts
Guorui Zheng, Xidong Wang, Juhao Liang, Nuo Chen, Yuping Zheng, Benyou Wang
Topic: MoE
14 Oct 2024
LAE-ST-MoE: Boosted Language-Aware Encoder Using Speech Translation Auxiliary Task for E2E Code-switching ASR
Guodong Ma, Wenxuan Wang, Yuke Li, Yuting Yang, Binbin Du, Haoran Fu
28 Sep 2023
Multilingual Speech Recognition using Knowledge Transfer across Learning Processes
Rimita Lahiri, K. Kumatani, Eric Sun, Yao Qian
15 Oct 2021
A Configurable Multilingual Model is All You Need to Recognize All Languages
Long Zhou, Jinyu Li, Eric Sun, Shujie Liu
13 Jul 2021
Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi
05 Jun 2021