
SpeechRefiner: Towards Perceptual Quality Refinement for Front-End Algorithms

16 June 2025
Sirui Li, Shuai Wang, Zhijun Liu, Zhongjie Jiang, Yannan Wang, Haizhou Li
ArXiv (abs) · PDF · HTML

Papers citing "SpeechRefiner: Towards Perceptual Quality Refinement for Front-End Algorithms"

28 / 28 papers shown

RaD-Net 2: A causal two-stage repairing and denoising speech enhancement network with knowledge distillation and complex axial self-attention
Mingshuai Liu, Zhuangqi Chen, Xiaopeng Yan, Yuanjun Lv, Xianjun Xia, Chuanzeng Huang, Yijian Xiao, Lei Xie
11 Jun 2024

ICASSP 2024 Speech Signal Improvement Challenge
Nicolae-Cătălin Ristea, Ando Saabas, Ross Cutler, Babak Naderi, Sebastian Braun, Solomiya Branets
25 Jan 2024

RaD-Net: A Repairing and Denoising Network for Speech Signal Improvement
Mingshuai Liu, Zhuangqi Chen, Xiaopeng Yan, Yuanjun Lv, Xianjun Xia, Chuanzeng Huang, Yijian Xiao, Lei Xie
09 Jan 2024

AutoPrep: An Automatic Preprocessing Framework for In-the-Wild Speech Data
Jianwei Yu, Hangting Chen, Yanyao Bian, Xiang Li, Yimin Luo, Jinchuan Tian, Mengyang Liu, Jiayi Jiang, Shuai Wang
VLM · 25 Sep 2023

Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis
Hubert Siuzdak
01 Jun 2023

LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, M. Bacchiani, Yu Zhang, Wei Han, Ankur Bapna
30 May 2023

Diffusion-based Signal Refiner for Speech Separation
M. Hirano, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji
DiffM · 10 May 2023

Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, M. Bacchiani
03 Mar 2023

Neural Target Speech Extraction: An Overview
Kateřina Žmolíková, Marc Delcroix, Tsubasa Ochiai, K. Kinoshita, Jan "Honza" Černocký, Dong Yu
31 Jan 2023

StoRM: A Diffusion-based Stochastic Regeneration Model for Speech Enhancement and Dereverberation
Jean-Marie Lemercier, Julius Richter, Simon Welker, Timo Gerkmann
DiffM · 22 Dec 2022

Diffiner: A Versatile Diffusion-based Generative Refiner for Speech Enhancement
Ryosuke Sawata, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji
DiffM · 27 Oct 2022

Flow Matching for Generative Modeling
Y. Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le
OOD · 06 Oct 2022

Speech Enhancement and Dereverberation with Diffusion-based Generative Models
Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, Timo Gerkmann
DiffM · 11 Aug 2022

VoiceFixer: A Unified Framework for High-Fidelity Speech Restoration
Haohe Liu, Xubo Liu, Qiuqiang Kong, Qiao Tian, Yan Zhao, DeLiang Wang, Chuanzeng Huang, Yuxuan Wang
12 Apr 2022

Conditional Diffusion Probabilistic Model for Speech Enhancement
Yen-Ju Lu, Zhongqiu Wang, Shinji Watanabe, Alexander Richard, Cheng Yu, Yu Tsao
DiffM · 10 Feb 2022

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021

Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech
Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, Mikhail Kudinov
DiffM · 13 May 2021

RoFormer: Enhanced Transformer with Rotary Position Embedding
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu
20 Apr 2021

DNSMOS: A Non-Intrusive Perceptual Objective Speech Quality Metric to Evaluate Noise Suppressors
Chandan K. A. Reddy, Vishak Gopal, Ross Cutler
28 Oct 2020

Attention is All You Need in Speech Separation
Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, Jianyuan Zhong
25 Oct 2020

Muse: Multi-modal Target Speaker Extraction with Visual Cues
Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li
15 Oct 2020

Conformer: Convolution-augmented Transformer for Speech Recognition
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, ..., Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
16 May 2020

SpEx+: A Complete Time Domain Speaker Extraction Network
Meng Ge, Chenglin Xu, Longbiao Wang, Chng Eng Siong, Jianwu Dang, Haizhou Li
10 May 2020

A Review of Multi-Objective Deep Learning Speech Denoising Methods
A. Azarang, N. Kehtarnavaz
26 Mar 2020

SDR - Half-baked or Well Done?
Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, J. Hershey
06 Nov 2018

Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation
Yi Luo, N. Mesgarani
20 Sep 2018

TasNet: Time-Domain Audio Separation Network for Real-Time, Single-Channel Speech Separation
Yi Luo, N. Mesgarani
01 Nov 2017

Supervised Speech Separation Based on Deep Learning: An Overview
DeLiang Wang, Jitong Chen
SSL · 24 Aug 2017