ResearchTrend.AI
On The Compensation Between Magnitude and Phase in Speech Separation
Zhong-Qiu Wang, G. Wichern, Jonathan Le Roux · 11 August 2021

Papers citing "On The Compensation Between Magnitude and Phase in Speech Separation" (16 papers)
  • ConSep: a Noise- and Reverberation-Robust Speech Separation Framework by Magnitude Conditioning
    Kuan-Hsun Ho, J. Hung, Berlin Chen · 04 Mar 2024
  • BASEN: Time-Domain Brain-Assisted Speech Enhancement Network with Convolutional Cross Attention in Multi-talker Conditions
    Jie Zhang, Qingquan Xu, Qiu-shi Zhu, Zhenhua Ling · 17 May 2023
  • Towards Unified All-Neural Beamforming for Time and Frequency Domain Speech Separation
    Rongzhi Gu, Shi-Xiong Zhang, Yuexian Zou, Dong Yu · 16 Dec 2022
  • TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation
    Zhongqiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeonghak Kim, Shinji Watanabe · 22 Nov 2022
  • A Two-Stage Deep Representation Learning-Based Speech Enhancement Method Using Variational Autoencoder and Adversarial Training
    Yang Xiang, Jesper Lisby Højvang, M. Rasmussen, M. G. Christensen · 16 Nov 2022
  • Online Phase Reconstruction via DNN-based Phase Differences Estimation
    Yoshiki Masuyama, Kohei Yatabe, Kento Nagatomo, Yasuhiro Oikawa · 12 Nov 2022
  • A deep representation learning speech enhancement method using β-VAE
    Yang Xiang, Jesper Lisby Højvang, M. Rasmussen, M. G. Christensen · 11 May 2022
  • Speaker Reinforcement Using Target Source Extraction for Robust Automatic Speech Recognition
    Catalin Zorila, R. Doddipatla · 09 May 2022
  • Taylor, Can You Hear Me Now? A Taylor-Unfolding Framework for Monaural Speech Enhancement
    Andong Li, Shan You, Guochen Yu, C. Zheng, Xiaodong Li · 30 Apr 2022
  • Phase-Aware Deep Speech Enhancement: It's All About The Frame Length
    Tal Peer, Timo Gerkmann · 30 Mar 2022
  • CMGAN: Conformer-based Metric GAN for Speech Enhancement
    Ru Cao, Sherif Abdulatif, Bin Yang · 28 Mar 2022
  • MDNet: Learning Monaural Speech Enhancement from Deep Prior Gradient
    Andong Li, C. Zheng, Ziyang Zhang, Xiaodong Li · 14 Mar 2022
  • Dual-branch Attention-In-Attention Transformer for single-channel speech enhancement
    Guochen Yu, Andong Li, C. Zheng, Yinuo Guo, Yutian Wang, Hui Wang · 13 Oct 2021
  • On Loss Functions for Supervised Monaural Time-Domain Speech Enhancement
    Morten Kolbæk, Zheng-Hua Tan, S. H. Jensen, Jesper Jensen · 03 Sep 2019
  • Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation
    Daniel Stoller, Sebastian Ewert, S. Dixon · 08 Jun 2018
  • End-to-End Speech Separation with Unfolded Iterative Phase Reconstruction
    Zhong-Qiu Wang, Jonathan Le Roux, DeLiang Wang, J. Hershey · 26 Apr 2018