
Pretraining Large Brain Language Model for Active BCI: Silent Speech

Abstract

This paper explores silent speech decoding in active brain-computer interface (BCI) systems, which offer more natural and flexible communication than traditional BCI applications. We collected a new silent speech dataset of over 120 hours of electroencephalogram (EEG) recordings from 12 subjects, capturing 24 commonly used English words for language model pretraining and decoding. Following the recent success of pretraining large models with self-supervised paradigms to enhance EEG classification performance, we propose the Large Brain Language Model (LBLM), pretrained to decode silent speech for active BCI. To pretrain LBLM, we introduce a Future Spectro-Temporal Prediction (FSTP) paradigm that learns effective representations from unlabeled EEG data. Unlike existing EEG pretraining methods, which mainly follow a masked-reconstruction paradigm, FSTP employs autoregressive modeling in both the temporal and frequency domains to capture temporal and spectral dependencies in EEG signals. After pretraining, we fine-tune LBLM on downstream tasks, including word-level and semantic-level classification. Extensive experiments demonstrate significant performance gains of LBLM over fully supervised and pretrained baseline models. For instance, in the challenging cross-session setting, our model achieves 47.0% accuracy on semantic-level classification and 39.6% on word-level classification, outperforming baseline methods by 5.4% and 7.3%, respectively. Our research advances silent speech decoding in active BCI systems, offering an innovative solution for EEG language model pretraining and a new dataset for fundamental research.
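
The abstract describes FSTP as autoregressive prediction in the temporal and frequency domains but does not spell out the objective. The following is a minimal PyTorch sketch of what such a pretraining loss could look like, assuming a patch-based causal Transformer encoder that predicts the next EEG patch and its spectral magnitude; the module names, patch size, channel count, and equal loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an FSTP-style objective: a causal encoder over EEG
# patches predicts the next patch in both the time and frequency domains.
# Shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class FSTPPretrainer(nn.Module):
    def __init__(self, n_channels=64, patch_len=200, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.patch_len = patch_len
        # Embed each (channels x patch_len) EEG patch into a d_model vector.
        self.patch_embed = nn.Linear(n_channels * patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Two heads: future time-domain patch and its spectral magnitude (rFFT bins).
        self.temporal_head = nn.Linear(d_model, n_channels * patch_len)
        self.spectral_head = nn.Linear(d_model, n_channels * (patch_len // 2 + 1))

    def forward(self, eeg):
        # eeg: (batch, channels, time); split into non-overlapping patches.
        b, c, t = eeg.shape
        n_patches = t // self.patch_len
        patches = eeg[:, :, :n_patches * self.patch_len]
        patches = patches.reshape(b, c, n_patches, self.patch_len).permute(0, 2, 1, 3)
        flat = patches.reshape(b, n_patches, -1)              # (b, n, c * patch_len)
        h = self.patch_embed(flat)
        # Causal mask so each position only attends to past patches.
        mask = nn.Transformer.generate_square_subsequent_mask(n_patches).to(eeg.device)
        h = self.encoder(h, mask=mask)
        # Predict patch k+1 from the representation at patch k.
        pred_time = self.temporal_head(h[:, :-1])             # (b, n-1, c * patch_len)
        pred_spec = self.spectral_head(h[:, :-1])
        target_time = flat[:, 1:]
        target_spec = torch.fft.rfft(patches[:, 1:], dim=-1).abs()
        target_spec = target_spec.reshape(b, n_patches - 1, -1)
        # Equal weighting of temporal and spectral losses is an assumption.
        return (nn.functional.mse_loss(pred_time, target_time)
                + nn.functional.mse_loss(pred_spec, target_spec))


# Usage on a dummy batch: 8 recordings, 64 channels, 5 s at 200 Hz.
model = FSTPPretrainer()
loss = model(torch.randn(8, 64, 1000))
loss.backward()
```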

@article{zhou2025_2504.21214,
  title={Pretraining Large Brain Language Model for Active BCI: Silent Speech},
  author={Jinzhao Zhou and Zehong Cao and Yiqun Duan and Connor Barkley and Daniel Leong and Xiaowei Jiang and Quoc-Toan Nguyen and Ziyi Zhao and Thomas Do and Yu-Cheng Chang and Sheng-Fu Liang and Chin-teng Lin},
  journal={arXiv preprint arXiv:2504.21214},
  year={2025}
}