WHAM!: Extending Speech Separation to Noisy Environments

2 July 2019 · arXiv:1907.01160
Gordon Wichern, J. Antognini, Michael Flynn, Licheng Richard Zhu, E. McQuinn, Dwight Crow, Ethan Manilow, Jonathan Le Roux

Papers citing "WHAM!: Extending Speech Separation to Noisy Environments"

Showing 50 of 60 citing papers.
TS-SUPERB: A Target Speech Processing Benchmark for Speech Self-Supervised Learning Models (10 May 2025)
Junyi Peng, Takanori Ashihara, Marc Delcroix, Tsubasa Ochiai, Oldrich Plchot, Shoko Araki, J. Černocký [ELM]

Listen to Extract: Onset-Prompted Target Speaker Extraction (08 May 2025)
Pengjie Shen, Kangrui Chen, Shulin He, Pengru Chen, Shuqi Yuan, He Kong, Xueliang Zhang, Zehao Wang

ReverbMiipher: Generative Speech Restoration meets Reverberation Characteristics Controllability (08 May 2025)
Wataru Nakata, Yuma Koizumi, Shigeki Karita, Robin Scheibler, Haruko Ishikawa, Adriana Guevara-Rukoz, Heiga Zen, M. Bacchiani

MaskClip: Detachable Clip-on Piezoelectric Sensing of Mask Surface Vibrations for Real-time Noise-Robust Speech Input (04 May 2025)
Hirotaka Hiraki, Jun Rekimoto

A Comparative Study on Positional Encoding for Time-frequency Domain Dual-path Transformer-based Source Separation Models (28 Apr 2025)
Kohei Saijo, Tetsuji Ogawa

End-to-End Target Speaker Speech Recognition Using Context-Aware Attention Mechanisms for Challenging Enrollment Scenario (28 Jan 2025)
Mohsen Ghane, Mohammad Sadegh Safari

USED: Universal Speaker Extraction and Diarization (17 Jan 2025)
Junyi Ao, Mehmet Sinan Yildirim, Ruijie Tao, Mengyao Ge, Shuai Wang, Yan-min Qian, Haizhou Li

SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios (02 Oct 2024)
Kai Li, Wendi Sang, Chang Zeng, Runxuan Yang, Guo Chen, Xiaolin Hu

TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation (02 Oct 2024)
Mohan Xu, Kai Li, Guo Chen, Xiaolin Hu

Learning Source Disentanglement in Neural Audio Codec (17 Sep 2024)
Xiaoyu Bie, Xubo Liu, Gaël Richard

USEF-TSE: Universal Speaker Embedding Free Target Speaker Extraction (04 Sep 2024)
Bang Zeng, Ming Li

Serialized Speech Information Guidance with Overlapped Encoding Separation for Multi-Speaker Automatic Speech Recognition (01 Sep 2024)
Hao Shi, Yuan Gao, Zhaoheng Ni, Tatsuya Kawahara

Advancing Multi-talker ASR Performance with Large Language Models (30 Aug 2024)
Mohan Shi, Zengrui Jin, Yaoxun Xu, Yong Xu, Shi-Xiong Zhang, Kun Wei, Yiwen Shao, Chunlei Zhang, Dong Yu

How Should We Extract Discrete Audio Tokens from Self-Supervised Models? (15 Jun 2024)
Pooneh Mousavi, J. Duret, Salah Zaiem, Luca Della Libera, Artem Ploujnikov, Cem Subakan, Mirco Ravanelli

Effects of Dataset Sampling Rate for Noise Cancellation through Deep Learning (30 May 2024)
Brandon Colelough, Andrew Zheng

A Large-Scale Evaluation of Speech Foundation Models (15 Apr 2024)
Shu-Wen Yang, Heng-Jui Chang, Zili Huang, Andy T. Liu, Cheng-I Jeff Lai, ..., Kushal Lakhotia, Shang-Wen Li, Abdelrahman Mohamed, Shinji Watanabe, Hung-yi Lee

ConSep: a Noise- and Reverberation-Robust Speech Separation Framework by Magnitude Conditioning (04 Mar 2024)
Kuan-Hsun Ho, J. Hung, Berlin Chen

MossFormer2: Combining Transformer and RNN-Free Recurrent Network for Enhanced Time-Domain Monaural Speech Separation (19 Dec 2023)
Shengkui Zhao, Yukun Ma, Chongjia Ni, Chong Zhang, Hao Wang, Trung Hieu Nguyen, Kun Zhou, J. Yip, Dianwen Ng, Bin Ma

FAT-HuBERT: Front-end Adaptive Training of Hidden-unit BERT for Distortion-Invariant Robust Speech Recognition (29 Nov 2023)
Dongning Yang, Wei Wang, Yanmin Qian

An objective evaluation of Hearing Aids and DNN-based speech enhancement in complex acoustic scenes (24 Jul 2023)
Enric Gusó, Joanna Luberadzka, Martí Baig, Umut Sayin Saraç, Xavier Serra

An Efficient Speech Separation Network Based on Recurrent Fusion Dilated Convolution and Channel Attention (09 Jun 2023)
Junyu Wang

A two-stage speaker extraction algorithm under adverse acoustic conditions using a single-microphone (13 Mar 2023)
Aviad Eisenberg, Sharon Gannot, Shlomo E. Chazan

Multi-Dimensional and Multi-Scale Modeling for Speech Separation Optimized by Discriminative Learning (07 Mar 2023)
Zhaoxi Mu, Xinyu Yang, Wenjing Zhu

DFSNet: A Steerable Neural Beamformer Invariant to Microphone Array Configuration for Real-Time, Low-Latency Speech Enhancement (26 Feb 2023)
A. Kovalyov, Kashyap Patel, Issa Panahi

Voice conversion with limited data and limitless data augmentations (27 Dec 2022)
Olga Slizovskaia, Jordi Janer, Pritish Chandna, Oscar Mayor

Audio Denoising for Robust Audio Fingerprinting (21 Dec 2022)
Kamil Akesbi

Tackling the Cocktail Fork Problem for Separation and Transcription of Real-World Soundtracks (14 Dec 2022)
Darius Petermann, Gordon Wichern, Aswin Shanmugam Subramanian, Zhong-Qiu Wang, Jonathan Le Roux

Latent Iterative Refinement for Modular Source Separation (22 Nov 2022)
Dimitrios Bralios, Efthymios Tzinis, Gordon Wichern, Paris Smaragdis, Jonathan Le Roux [BDL]

Improving generalizability of distilled self-supervised speech processing models under distorted settings (14 Oct 2022)
Kuan-Po Huang, Yu-Kuan Fu, Tsung-Yuan Hsu, Fabian Ritter-Gutierrez, Fan Wang, Liang-Hsuan Tseng, Yu Zhang, Hung-yi Lee

ClearBuds: Wireless Binaural Earbuds for Learning-Based Speech Enhancement (27 Jun 2022)
Ishan Chatterjee, Maruchi Kim, V. Jayaram, Shyamnath Gollakota, Ira Kemelmacher-Shlizerman, Shwetak N. Patel, S. M. Seitz

Speaker Reinforcement Using Target Source Extraction for Robust Automatic Speech Recognition (09 May 2022)
Catalin Zorila, R. Doddipatla

Efficient dynamic filter for robust and low computational feature extraction (03 May 2022)
Donghyeon Kim, Gwantae Kim, Bokyeung Lee, Jeong-gi Kwak, D. Han, Hanseok Ko

RadioSES: mmWave-Based Audioradio Speech Enhancement and Separation System (14 Apr 2022)
M. Z. Ozturk, Chenshu Wu, Beibei Wang, Min Wu, K. Liu

Neural Network-augmented Kalman Filtering for Robust Online Speech Dereverberation in Noisy Reverberant Environments (06 Apr 2022)
Jean-Marie Lemercier, J. Thiemann, Raphael Koning, Timo Gerkmann

EEND-SS: Joint End-to-End Neural Speaker Diarization and Speech Separation for Flexible Number of Speakers (31 Mar 2022)
Soumi Maiti, Yushi Ueda, Shinji Watanabe, Chunlei Zhang, Meng Yu, Shi-Xiong Zhang, Yong-mei Xu

A Hybrid Continuity Loss to Reduce Over-Suppression for Time-domain Target Speaker Extraction (31 Mar 2022)
Zexu Pan, Meng Ge, Haizhou Li

Improving Source Separation by Explicitly Modeling Dependencies Between Sources (28 Mar 2022)
Ethan Manilow, Curtis Hawthorne, Cheng-Zhi Anna Huang, Bryan Pardo, Jesse Engel [BDL]

SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities (14 Mar 2022)
Hsiang-Sheng Tsai, Heng-Jui Chang, Wen-Chin Huang, Zili Huang, Kushal Lakhotia, ..., Hsuan-Jui Chen, Shang-Wen Li, Shinji Watanabe, Abdel-rahman Mohamed, Hung-yi Lee

CNN self-attention voice activity detector (06 Mar 2022)
Amit Sofer, Shlomo E. Chazan

Single microphone speaker extraction using unified time-frequency Siamese-Unet (06 Mar 2022)
Aviad Eisenberg, Sharon Gannot, Shlomo E. Chazan

ESPnet-SLU: Advancing Spoken Language Understanding through ESPnet (29 Nov 2021)
Siddhant Arora, Siddharth Dalmia, Pavel Denisov, Xuankai Chang, Yushi Ueda, ..., Karthik Ganesan, Brian Yan, Ngoc Thang Vu, A. Black, Shinji Watanabe [VLM]

LightSAFT: Lightweight Latent Source Aware Frequency Transform for Source Separation (24 Nov 2021)
Yeong-Seok Jeong, Jinsung Kim, Woosung Choi, Jaehwa Chung, Soonyoung Jung

Reduction of Subjective Listening Effort for TV Broadcast Signals with Recurrent Neural Networks (02 Nov 2021)
Nils L. Westhausen, R. Huber, Hannah Baumgartner, Ragini Sinha, J. Rennies, B. Meyer

Closing the Gap Between Time-Domain Multi-Channel Speech Enhancement on Real and Simulation Conditions (27 Oct 2021)
Wangyou Zhang, Jing Shi, Chenda Li, Shinji Watanabe, Y. Qian

Continual self-training with bootstrapped remixing for speech enhancement (19 Oct 2021)
Efthymios Tzinis, Yossi Adi, V. Ithapu, Buye Xu, Anurag Kumar

The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks (19 Oct 2021)
Darius Petermann, Gordon Wichern, Zhong-Qiu Wang, Jonathan Le Roux

Toward Degradation-Robust Voice Conversion (14 Oct 2021)
Chien-yu Huang, Kai-Wei Chang, Hung-yi Lee

TRUNet: Transformer-Recurrent-U Network for Multi-channel Reverberant Sound Source Separation (08 Oct 2021)
Ali Aroudi, Stefan Uhlich, M. Font [ViT]

USEV: Universal Speaker Extraction with Visual Cue (30 Sep 2021)
Zexu Pan, Meng Ge, Haizhou Li

Separate but Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data (11 May 2021)
Efthymios Tzinis, Jonah Casebeer, Zhepei Wang, Paris Smaragdis [FedML]