On Batching Variable Size Inputs for Training End-to-End Speech Enhancement Systems
Philippe Gonzalez, T. S. Alstrøm, Tobias May
arXiv:2301.10587, v2 (latest), 25 January 2023

Papers citing "On Batching Variable Size Inputs for Training End-to-End Speech Enhancement Systems"

14 papers shown
  • TPARN: Triple-path Attentive Recurrent Network for Time-domain Multichannel Speech Enhancement. Ashutosh Pandey, Buye Xu, Anurag Kumar, Jacob Donley, P. Calamia, DeLiang Wang (20 Oct 2021)
  • DCCRN+: Channel-wise Subband DCCRN with SNR Estimation for Speech Enhancement. Shubo Lv, Yanxin Hu, Shimin Zhang, Lei Xie (16 Jun 2021)
  • Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks. Yuichiro Koyama, Tyler Vuong, Stefan Uhlich, Bhiksha Raj (23 May 2020)
  • SDR - half-baked or well done? Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, J. Hershey (06 Nov 2018)
  • Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation. Yi Luo, N. Mesgarani (20 Sep 2018)
  • Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. Jonathan Shen, Ruoming Pang, Ron J. Weiss, M. Schuster, Navdeep Jaitly, ..., Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, Yonghui Wu (16 Dec 2017)
  • Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling. Qi Meng, Wei Chen, Yue Wang, Zhi-Ming Ma, Tie-Yan Liu (29 Sep 2017)
  • Supervised Speech Separation Based on Deep Learning: An Overview. DeLiang Wang, Jitong Chen (24 Aug 2017)
  • Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization. Viacheslav Khomenko, Oleg Shyshkov, Olga Radyvonenko, Kostiantyn Bokhan (18 Aug 2017)
  • An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation. Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura (19 Jun 2017)
  • A comprehensive study of batch construction strategies for recurrent neural networks in MXNet. P. Doetsch, Pavel Golik, Hermann Ney (05 May 2017)
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)
  • Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba (22 Dec 2014)
  • Practical recommendations for gradient-based training of deep architectures. Yoshua Bengio (24 Jun 2012)