ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
arXiv:2102.03957
Extracting the Auditory Attention in a Dual-Speaker Scenario from EEG using a Joint CNN-LSTM Model

8 February 2021
Ivine Kuruvila, J. Muncke, Eghart Fischer, U. Hoppe

Papers citing "Extracting the Auditory Attention in a Dual-Speaker Scenario from EEG using a Joint CNN-LSTM Model"

2 citing papers shown
EEG-Derived Voice Signature for Attended Speaker Detection
Hongxu Zhu, Siqi Cai, Yidi Jiang, Qiquan Zhang, Haizhou Li
28 Aug 2023
Relating EEG to continuous speech using deep neural networks: a review
Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour-Monesi, Jonas Vanthornhout, Hugo Van hamme, T. Francart
03 Feb 2023