Advances in Online Audio-Visual Meeting Transcription

10 December 2019 (arXiv:1912.04979)
Takuya Yoshioka, Igor Abramovski, Cem Aksoylar, Zhuo Chen, Moshe David, Dimitrios Dimitriadis, Jiawei Liu, I. Gurvich, Xuedong Huang, Yan-ping Huang, Aviv Hurvitz, Li Jiang, S. Koubi, Eyal Krupka, Ido Leichter, Changliang Liu, P. Parthasarathy, Alon Vinnikov, Lingfeng Wu, Xiong Xiao, Wayne Xiong, Huaming Wang, Zhenghao Wang, Jun Zhang, Yong Zhao, Tianyan Zhou
Abstract

This paper describes a system that generates speaker-annotated transcripts of meetings by using a microphone array and a 360-degree camera. The hallmark of the system is its ability to handle overlapped speech, which has remained an unsolved problem in realistic settings for over a decade. We show that this problem can be addressed with a continuous speech separation approach. In addition, we describe an online audio-visual speaker diarization method that leverages face tracking and identification, sound source localization, speaker identification, and, if available, prior speaker information for robustness to various real-world challenges. All components are integrated in a meeting transcription framework called SRD, which stands for "separate, recognize, and diarize". Experimental results using recordings of natural meetings involving up to 11 attendees are reported. The continuous speech separation improves the word error rate (WER) by 16.1% compared with a highly tuned beamformer. When a complete list of meeting attendees is available, the discrepancy between the WER and the speaker-attributed WER is only 1.0%, indicating accurate word-to-speaker association. This gap increases only marginally, to 1.6%, when 50% of the attendees are unknown to the system.
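The abstract's key metric contrast is between the WER and the speaker-attributed WER (SA-WER): the two differ only when a word is attributed to the wrong speaker, which is why a 1.0% gap indicates accurate word-to-speaker association. The sketch below is a minimal, self-contained illustration of that distinction on toy data, using a standard word-level Levenshtein distance and a simple per-speaker scoring scheme; it is not the authors' evaluation code, and the exact SA-WER definition used in the paper may differ in detail.

```python
# Toy illustration of WER vs. speaker-attributed WER (SA-WER).
# Illustrative code only, not the paper's evaluation tooling.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(ref_words, hyp_words):
    """Plain WER: errors over reference length, ignoring who spoke."""
    return edit_distance(ref_words, hyp_words) / len(ref_words)

def sa_wer(ref_by_speaker, hyp_by_speaker):
    """Score each speaker's hypothesis against that same speaker's
    reference, so a correctly recognized word attributed to the
    wrong person still counts as an error."""
    total_errors = 0
    total_words = 0
    for spk, ref_words in ref_by_speaker.items():
        hyp_words = hyp_by_speaker.get(spk, [])
        total_errors += edit_distance(ref_words, hyp_words)
        total_words += len(ref_words)
    return total_errors / total_words

# Two speakers; every word is recognized correctly, but "thanks"
# is attributed to the wrong speaker by the diarization stage.
ref = {"alice": "let us start thanks".split(), "bob": "sounds good".split()}
hyp = {"alice": "let us start".split(), "bob": "thanks sounds good".split()}

all_ref = ref["alice"] + ref["bob"]
all_hyp = hyp["alice"] + hyp["bob"]
print(f"WER:    {wer(all_ref, all_hyp):.2f}")  # 0.00
print(f"SA-WER: {sa_wer(ref, hyp):.2f}")       # 0.33
```

In this toy example the recognizer makes no word errors, so the WER is 0%, yet misattributing a single word drives the SA-WER to 33%. Read against that, the paper's reported 1.0% and 1.6% gaps suggest word-to-speaker misattributions were rare even with up to 11 attendees.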
