Towards Voice Reconstruction from EEG during Imagined Speech

2 January 2023
Young-Eun Lee, Seo-Hyun Lee, Sang-Ho Kim, Seong-Whan Lee
Abstract

Translating imagined speech from human brain activity into voice is a challenging and compelling research problem that could provide a new means of human communication via brain signals. Prior efforts to reconstruct speech from brain activity have shown potential using invasive measurements of spoken speech, but they have struggled to reconstruct imagined speech. In this paper, we propose NeuroTalk, which converts non-invasive brain signals of imagined speech into the user's own voice. Our model was trained on EEG recorded during spoken speech and generalized to the domain of imagined speech, allowing a natural correspondence between imagined speech and the voice used as ground truth. In our framework, an automatic speech recognition decoder helps decompose the generated speech into phonemes, demonstrating the potential to reconstruct voice from unseen words. Our results suggest that speech can be synthesized from human EEG signals, not only during spoken speech but also during imagined speech.
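To make the described pipeline concrete, the sketch below shows one way such a system could be wired up in PyTorch: an EEG encoder that maps brain signals to mel-spectrogram frames, plus a CTC-based recognition head standing in for the ASR decoder that constrains the generated speech at the character/phoneme level. This is a minimal illustration under stated assumptions, not the authors' implementation; the module names (EEGToMel, CTCHead), all tensor shapes, and the choice of a CTC loss are hypothetical.

```python
# Hypothetical sketch of an EEG-to-speech pipeline in the spirit of NeuroTalk.
# All module names, shapes, and the CTC formulation are illustrative assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn

class EEGToMel(nn.Module):
    """Maps multi-channel EEG to mel-spectrogram frames (illustrative shapes)."""
    def __init__(self, n_channels=64, n_mels=80, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.GELU(),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_mels)

    def forward(self, eeg):                   # eeg: (batch, channels, time)
        h = self.conv(eeg).transpose(1, 2)    # -> (batch, time, hidden)
        h, _ = self.rnn(h)
        return self.proj(h)                   # -> (batch, time, n_mels)

class CTCHead(nn.Module):
    """Stand-in ASR decoder: character logits from mel frames for a CTC loss."""
    def __init__(self, n_mels=80, vocab=29, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, vocab)

    def forward(self, mel):
        h, _ = self.rnn(mel)
        return self.fc(h).log_softmax(-1)     # (batch, time, vocab)

# Toy training step: mel reconstruction loss + CTC loss on the transcript,
# using paired spoken-speech data (fake tensors here for illustration).
gen, asr = EEGToMel(), CTCHead()
ctc = nn.CTCLoss(blank=0)
opt = torch.optim.Adam(list(gen.parameters()) + list(asr.parameters()), lr=1e-4)

eeg = torch.randn(2, 64, 200)          # fake EEG: (batch, channels, time)
mel_target = torch.randn(2, 200, 80)   # paired mel frames from spoken speech
text = torch.randint(1, 29, (2, 12))   # character targets (0 reserved for blank)

mel = gen(eeg)
log_probs = asr(mel).transpose(0, 1)   # CTCLoss expects (time, batch, vocab)
loss = nn.functional.l1_loss(mel, mel_target) + ctc(
    log_probs, text,
    input_lengths=torch.full((2,), 200),
    target_lengths=torch.full((2,), 12),
)
opt.zero_grad()
loss.backward()
opt.step()
```

In a complete system, the predicted mel frames would still need a vocoder to produce an audible waveform, and training on spoken-speech EEG with a speech-recognition constraint is what would let the model transfer to imagined-speech EEG, as the abstract describes.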
