ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

SDR - half-baked or well done?
Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, John R. Hershey
arXiv:1811.02508 · 6 November 2018
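The headline paper argues for the scale-invariant SDR (SI-SDR) as a more robust evaluation metric for single-channel source separation. As context for the citing papers below, here is a minimal pure-Python sketch of that metric, assuming zero-mean, equal-length signals; the function name `si_sdr_db` is illustrative, not from any particular library.

```python
import math

def si_sdr_db(reference, estimate):
    """Scale-invariant signal-to-distortion ratio in dB.

    The reference is rescaled by the least-squares factor alpha before
    comparison, so the score is unaffected by any gain applied to the
    estimate (the "scale-invariant" part).
    """
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference)
    alpha = dot / ref_energy                      # optimal scaling factor
    target = [alpha * r for r in reference]       # scaled target component
    error = [e - t for e, t in zip(estimate, target)]
    target_energy = sum(t * t for t in target)
    error_energy = sum(er * er for er in error)
    return 10.0 * math.log10(target_energy / error_energy)
```

An estimate close to the reference scores high, a poor one scores low, and multiplying the estimate by any constant leaves the score unchanged.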

Papers citing "SDR - half-baked or well done?"

Showing 50 of 614 citing papers.

Asteroid: the PyTorch-based audio source separation toolkit for researchers
Manuel Pariente, Samuele Cornell, Joris Cosentino, S. Sivasankaran, Efthymios Tzinis, ..., Juan M. Martín-Donas, David Ditter, Ariel Frank, Antoine Deleforge, Emmanuel Vincent
08 May 2020

Time-domain speaker extraction network
Chenglin Xu, Wei Rao, Chng Eng Siong, Haizhou Li
29 Apr 2020

Neural Speech Separation Using Spatially Distributed Microphones
Dongmei Wang, Zhuo Chen, Takuya Yoshioka
28 Apr 2020

SpEx: Multi-Scale Time Domain Speaker Extraction Network
Chenglin Xu, Wei Rao, Eng Siong Chng, Haizhou Li
17 Apr 2020

Two-stage model and optimal SI-SNR for monaural multi-speaker speech separation in noisy environment
Chao Ma, Dongmei Li, Xupeng Jia
14 Apr 2020

MM Algorithms for Joint Independent Subspace Analysis with Application to Blind Single and Multi-Source Extraction
Robin Scheibler, Nobutaka Ono
08 Apr 2020

Conditioned Source Separation for Music Instrument Performances
Olga Slizovskaia, G. Haro, E. Gómez
08 Apr 2020

Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss
Yi Luo, N. Mesgarani
27 Mar 2020

Improving noise robust automatic speech recognition with single-channel time-domain enhancement network
K. Kinoshita, Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani
09 Mar 2020

Enhancing End-to-End Multi-channel Speech Separation via Spatial Feature Learning
Rongzhi Gu, Shi-Xiong Zhang, Lianwu Chen, Yong-mei Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu
09 Mar 2020

Multi-Microphone Complex Spectral Mapping for Speech Dereverberation
Zhong-Qiu Wang, DeLiang Wang
04 Mar 2020

Unsupervised Interpretable Representation Learning for Singing Voice Separation
S. I. Mimilakis, Konstantinos Drossos, G. Schuller
03 Mar 2020

Voice Separation with an Unknown Number of Multiple Speakers
Eliya Nachmani, Yossi Adi, Lior Wolf
29 Feb 2020

Wavesplit: End-to-End Speech Separation by Speaker Clustering
Neil Zeghidour, David Grangier
20 Feb 2020

An empirical study of Conv-TasNet
Berkan Kadıoğlu, Michael Horgan, Xiaoyu Liu, Jordi Pons, Dan Darcy, Vivek Kumar
20 Feb 2020

Real-time binaural speech separation with preserved spatial cues
Cong Han, Yi Luo, N. Mesgarani
16 Feb 2020

Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention
Yuma Koizumi, Kohei Yatabe, Marc Delcroix, Yoshiki Masuyama, Daiki Takeuchi
14 Feb 2020

WaveTTS: Tacotron-based TTS with Joint Time-Frequency Domain Loss
Rui Liu, Berrak Sisman, F. Bao, Guanglai Gao, Haizhou Li
02 Feb 2020

Continuous speech separation: dataset and analysis
Zhuo Chen, Takuya Yoshioka, Liang Lu, Tianyan Zhou, Zhong Meng, Yi Luo, Jian Wu, Xiong Xiao, Jinyu Li
30 Jan 2020

Weighted Speech Distortion Losses for Neural-network-based Real-time Speech Enhancement
Yangyang Xia, Sebastian Braun, Chandan K. A. Reddy, Harishchandra Dubey, Ross Cutler, I. Tashev
28 Jan 2020

CLCNet: Deep learning-based Noise Reduction for Hearing Aids using Complex Linear Coding
Hendrik Schröter, T. Rosenkranz, Alberto N. Escalante, Marc Aubreville, Andreas Maier
28 Jan 2020

Improving speaker discrimination of target speech extraction with time-domain SpeakerBeam
Marc Delcroix, Tsubasa Ochiai, Kateřina Žmolíková, K. Kinoshita, Naohiro Tawara, Tomohiro Nakatani, S. Araki
23 Jan 2020

LaFurca: Iterative Refined Speech Separation Based on Context-Aware Dual-Path Parallel Bi-LSTM
Ziqiang Shi, Rujie Liu, Jiqing Han
23 Jan 2020

Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation
Rongzhi Gu, Yuexian Zou
02 Jan 2020

End-to-end training of time domain audio separation and recognition
Thilo von Neumann, K. Kinoshita, Lukas Drude, Christoph Boeddeker, Marc Delcroix, Tomohiro Nakatani, Reinhold Haeb-Umbach
18 Dec 2019

A Supervised Speech enhancement Approach with Residual Noise Control for Voice Communication
Andong Li, C. Zheng, Xiaodong Li
08 Dec 2019

Invertible DNN-based nonlinear time-frequency transform for speech enhancement
Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada
25 Nov 2019

Joint NN-Supported Multichannel Reduction of Acoustic Echo, Reverberation and Noise
Guillaume Carbajal, Romain Serizel, Emmanuel Vincent, E. Humbert
20 Nov 2019

Sequential Multi-Frame Neural Beamforming for Speech Separation and Enhancement
Zhong-Qiu Wang, Hakan Erdogan, Scott Wisdom, K. Wilson, Desh Raj, Shinji Watanabe, Zhuo Chen, J. Hershey
18 Nov 2019

Improving Universal Sound Separation Using Sound Classification
Efthymios Tzinis, Scott Wisdom, J. Hershey, A. Jansen, D. Ellis
18 Nov 2019

Online Spectrogram Inversion for Low-Latency Audio Source Separation
P. Magron, Tuomas Virtanen
08 Nov 2019

Finding Strength in Weakness: Learning to Separate Sounds with Weak Supervision
Fatemeh Pishdadian, Gordon Wichern, Jonathan Le Roux
06 Nov 2019

Closing the Training/Inference Gap for Deep Attractor Networks
C. Cadoux, Stefan Uhlich, Marc Ferras, Yuki Mitsufuji
05 Nov 2019

End-to-end Non-Negative Autoencoders for Sound Source Separation
Shrikant Venkataramani, Efthymios Tzinis, Paris Smaragdis
31 Oct 2019

End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation
Yi Luo, Zhuo Chen, N. Mesgarani, Takuya Yoshioka
30 Oct 2019

SMS-WSJ: Database, performance measures, and baseline recipe for multi-channel source separation and recognition
Lukas Drude, Jens Heitkaemper, Christoph Boeddeker, Reinhold Haeb-Umbach
30 Oct 2019

A Recurrent Variational Autoencoder for Speech Enhancement
Simon Leglaive, Xavier Alameda-Pineda, Laurent Girin, Radu Horaud
24 Oct 2019

Model selection for deep audio source separation via clustering analysis
Alisa Liu, Prem Seetharaman, Bryan Pardo
23 Oct 2019

Bootstrapping deep music separation from primitive auditory grouping principles
Prem Seetharaman, Mojtaba Sahraee-Ardakan, Jonathan Le Roux, S. Rangan
23 Oct 2019

Filterbank design for end-to-end speech separation
Manuel Pariente, Samuele Cornell, Antoine Deleforge, Emmanuel Vincent
23 Oct 2019

End-to-End Multi-Task Denoising for the Joint Optimization of Perceptual Speech Metrics
Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee
23 Oct 2019

WHAMR!: Noisy and Reverberant Single-Channel Speech Separation
Matthew Maciejewski, Gordon Wichern, E. McQuinn, Jonathan Le Roux
22 Oct 2019

Two-Step Sound Source Separation: Training on Learned Latent Targets
Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Y. C. Sübakan, Paris Smaragdis
22 Oct 2019

Simultaneous Separation and Transcription of Mixtures with Multiple Polyphonic and Percussive Instruments
Ethan Manilow, Prem Seetharaman, Bryan Pardo
22 Oct 2019

Comparative Study between Adversarial Networks and Classical Techniques for Speech Enhancement
Tito Spadini, R. Suyama
21 Oct 2019

MIMO-SPEECH: End-to-End Multi-Channel Multi-Speaker Speech Recognition
Xuankai Chang, Wangyou Zhang, Y. Qian, Jonathan Le Roux, Shinji Watanabe
15 Oct 2019

Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation
Yi Luo, Zhuo Chen, Takuya Yoshioka
14 Oct 2019

FaSNet: Low-latency Adaptive Beamforming for Multi-microphone Audio Processing
Yi Luo, Enea Ceolini, Cong Han, Shih-Chii Liu, N. Mesgarani
29 Sep 2019

CochleaNet: A Robust Language-independent Audio-Visual Model for Speech Enhancement
M. Gogate, K. Dashtipour, Ahsan Adeel, Amir Hussain
23 Sep 2019

Cutting Music Source Separation Some Slakh: A Dataset to Study the Impact of Training Data Quality and Quantity
Ethan Manilow, Gordon Wichern, Prem Seetharaman, Jonathan Le Roux
18 Sep 2019