ResearchTrend.AI

Two-Step Knowledge Distillation for Tiny Speech Enhancement

Rayan Daod Nathoo, M. Kegler, Marko Stamenovic
arXiv:2309.08144, 15 September 2023
Papers citing "Two-Step Knowledge Distillation for Tiny Speech Enhancement"

13 citing papers shown:
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu · VLM, OffRL · 19 Jun 2023

Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition
J. Yoon, Beom Jun Woo, Sunghwan Ahn, Hyeon Seung Lee, N. Kim · VLM · 28 Nov 2022

DNSMOS P.835: A Non-Intrusive Perceptual Objective Speech Quality Metric to Evaluate Noise Suppressors
Chandan K. A. Reddy, Vishak Gopal, Ross Cutler · 05 Oct 2021

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia · 19 Apr 2021

Towards efficient models for real-time deep noise suppression
Sebastian Braun, H. Gamper, Chandan K. A. Reddy, I. Tashev · 22 Jan 2021

TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids
Igor Fedorov, Marko Stamenovic, Carl R. Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, P. Whatmough · 20 May 2020

The INTERSPEECH 2020 Deep Noise Suppression Challenge: Datasets, Subjective Testing Framework, and Challenge Results
Chandan K. A. Reddy, Vishak Gopal, Ross Cutler, Ebrahim Beyrami, R. Cheng, ..., A. Aazami, Sebastian Braun, Puneet Rana, Sriram Srinivasan, J. Gehrke · 16 May 2020

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori · 23 Jul 2019

Similarity of Neural Network Representations Revisited
Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey E. Hinton · 01 May 2019

SDR - half-baked or well done?
Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, J. Hershey · 06 Nov 2018

Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation
Yi Luo, N. Mesgarani · 20 Sep 2018

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis · 12 Dec 2016

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · FedML · 19 Dec 2014