ResearchTrend.AI
Integration of speech separation, diarization, and recognition for multi-speaker meetings: System description, comparison, and analysis

3 November 2020
Desh Raj
Pavel Denisov
Zhuo Chen
Hakan Erdogan
Zili Huang
Maokui He
Shinji Watanabe
Jun Du
Takuya Yoshioka
Yi Luo
Naoyuki Kanda
Jinyu Li
Scott Wisdom
J. Hershey
Abstract

Multi-speaker speech recognition of unsegmented recordings has diverse applications such as meeting transcription and automatic subtitle generation. With technical advances over the last decade in speech separation, speaker diarization, and automatic speech recognition (ASR), it has become possible to build pipelines that achieve reasonable error rates on this task. In this paper, we propose an end-to-end modular system for the LibriCSS meeting data, which combines independently trained separation, diarization, and recognition components, in that order. We study the effect of different state-of-the-art methods at each stage of the pipeline, and report results using task-specific metrics such as signal-to-distortion ratio (SDR) and diarization error rate (DER), as well as downstream word error rate (WER). Experiments indicate that the problem of overlapping speech for diarization and ASR can be effectively mitigated in the presence of a well-trained separation module. Our best system achieves a speaker-attributed WER of 12.7%, which is close to that of a non-overlapping ASR.
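As background for the downstream metric reported above: plain WER is the word-level edit distance between a reference and a hypothesis transcript, normalized by the reference length (the speaker-attributed variant in the paper additionally accounts for speaker assignment, which this sketch does not model). A minimal illustration:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `word_error_rate("a b c d", "a x c")` counts one substitution and one deletion against four reference words, giving 0.5.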

View on arXiv: 2011.02014