A Silent Speech Decoding System from EEG and EMG with Heterogenous Electrode Configurations

Silent speech decoding, which recognizes unvocalized speech from electroencephalography/electromyography (EEG/EMG) signals, increases accessibility for people with speech impairments. However, data collection is difficult and is performed with varying experimental setups, making it nontrivial to assemble a large, homogeneous dataset. In this study, we introduce neural networks that can handle EEG/EMG with heterogeneous electrode placements and show strong silent speech decoding performance via multi-task training on large-scale EEG/EMG datasets. We achieve improved word classification accuracy for both healthy participants (95.3%) and a speech-impaired patient (54.5%), substantially outperforming models trained on single-subject data (70.1% and 13.2%, respectively). Moreover, our models also show gains in cross-language calibration performance. These improvements suggest the feasibility of developing practical silent speech decoding systems, particularly for speech-impaired patients.
@article{inoue2025_2506.13835,
  title={A Silent Speech Decoding System from EEG and EMG with Heterogenous Electrode Configurations},
  author={Masakazu Inoue and Motoshige Sato and Kenichi Tomeoka and Nathania Nah and Eri Hatakeyama and Kai Arulkumaran and Ilya Horiguchi and Shuntaro Sasai},
  journal={arXiv preprint arXiv:2506.13835},
  year={2025}
}