Confidence-Based Self-Training for EMG-to-Speech: Leveraging Synthetic EMG for Robust Modeling

Main: 6 pages
3 figures
Bibliography: 2 pages
Abstract

Voiced Electromyography (EMG)-to-Speech (V-ETS) models reconstruct speech from muscle activity signals, enabling applications such as neurolaryngologic diagnostics. Despite this potential, progress in V-ETS is hindered by the scarcity of paired EMG-speech data. To address this, we propose a novel Confidence-based Multi-Speaker Self-training (CoM2S) approach, along with a newly curated Libri-EMG dataset. The approach leverages synthetic EMG data generated by a pre-trained model and applies a phoneme-level confidence filtering mechanism to select reliable samples, which are then used to improve the ETS model through self-training. Experiments demonstrate that our method improves phoneme accuracy, reduces phonological confusion, and lowers word error rate, confirming the effectiveness of CoM2S for V-ETS. In support of future research, we will release the code and the proposed Libri-EMG dataset: an open-access, time-aligned collection of multi-speaker voiced EMG and speech recordings.
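The abstract does not specify how phoneme-level confidence is computed or aggregated before filtering; the sketch below shows one plausible reading of such a filtering step, assuming each synthetic-EMG utterance carries per-phoneme posteriors from the pre-trained model. The function name, data layout, aggregation rule, and threshold are all hypothetical, not taken from the paper.

```python
import numpy as np

def filter_by_phoneme_confidence(utterances, threshold=0.8):
    """Keep synthetic-EMG utterances whose phoneme-level confidence
    clears a threshold. Here confidence is the mean posterior the
    pre-trained model assigns to each aligned phoneme (one plausible
    aggregation; min() would be a stricter alternative)."""
    kept = []
    for utt in utterances:
        # utt["phoneme_probs"]: posterior for each aligned phoneme (hypothetical field)
        probs = np.asarray(utt["phoneme_probs"])
        if probs.mean() >= threshold:
            kept.append(utt)
    return kept

# Self-training sketch: augment the real paired data with the filtered
# synthetic pairs, then retrain the EMG-to-Speech model on the union.
# train_set = real_pairs + filter_by_phoneme_confidence(synthetic_pairs)
```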

@article{chen2025_2506.11862,
  title={Confidence-Based Self-Training for EMG-to-Speech: Leveraging Synthetic EMG for Robust Modeling},
  author={Xiaodan Chen and Xiaoxue Gao and Mathias Quoy and Alexandre Pitti and Nancy F. Chen},
  journal={arXiv preprint arXiv:2506.11862},
  year={2025}
}