
Speaker-independent Speech Separation with Deep Attractor Network

12 July 2017
Yi Luo
Zhuo Chen
N. Mesgarani
arXiv: 1707.03634
Abstract

Despite the recent success of deep learning for many speech processing tasks, single-microphone speech separation remains challenging for two main reasons. The first is the arbitrary order of the target and masker speakers in the mixture (the permutation problem), and the second is the unknown number of speakers in the mixture (the output dimension problem). We propose a novel deep learning framework for speech separation that addresses both of these issues. We use a neural network to project the time-frequency representation of the mixture signal into a high-dimensional embedding space. A reference point (attractor) is created in the embedding space for each speaker to pull together all the time-frequency bins that belong to that speaker. The attractor point for a speaker is formed by finding the centroid of the source in the embedding space, which is then used to determine the source assignment. We propose three methods for finding the attractor points for each source: unsupervised clustering, fixed attractor points, and fixed anchor points in the embedding space that guide the estimation of attractor points. The objective function for the network is the standard signal reconstruction error, which enables end-to-end operation during both the training and test phases. We evaluate our system on two- and three-speaker mixtures from the Wall Street Journal dataset (WSJ0) and report performance comparable to or better than other deep learning methods for speech separation.
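As a concrete illustration of the centroid rule the abstract describes, here is a minimal NumPy sketch, not the authors' implementation: given the per-bin embeddings produced by the network and an oracle bin-to-speaker assignment (available at training time), it forms one attractor per speaker as the centroid of that speaker's bins and derives soft separation masks from embedding-attractor similarity. The function name, array shapes, and the softmax mask rule are illustrative assumptions not spelled out in the abstract.

```python
import numpy as np

def attractor_masks(V, Y, eps=1e-8):
    """Hypothetical helper sketching the attractor idea.

    V : (N, K) array, embedding of each of N time-frequency bins,
        as produced by the network described in the abstract.
    Y : (N, C) one-hot assignment of each bin to one of C speakers
        (the oracle assignment usable during training; at test time
        the abstract's clustering / fixed-attractor variants apply).
    """
    # Attractor for each speaker = centroid of that speaker's bins
    # in the embedding space (the rule stated in the abstract).
    A = (Y.T @ V) / (Y.sum(axis=0)[:, None] + eps)        # (C, K)

    # Soft mask from bin-attractor similarity; a softmax over
    # speakers is an assumption here, not given in the abstract.
    logits = V @ A.T                                       # (N, C)
    logits -= logits.max(axis=1, keepdims=True)            # stability
    M = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return A, M

# Toy usage: 200 bins, 20-dim embeddings, 2 speakers.
rng = np.random.default_rng(0)
V = rng.normal(size=(200, 20))
Y = np.eye(2)[rng.integers(0, 2, size=200)]
A, M = attractor_masks(V, Y)
print(A.shape, M.shape)  # (2, 20) (200, 2)
```

Masking the mixture spectrogram with M and measuring reconstruction error against the clean sources would then give the end-to-end signal reconstruction objective mentioned in the abstract.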
