Listen to What You Want: Neural Network-based Universal Sound Selector

Tsubasa Ochiai, Marc Delcroix, Yuma Koizumi, Hiroaki Ito, K. Kinoshita, S. Araki. 10 June 2020. arXiv:2006.05712.

Papers citing "Listen to What You Want: Neural Network-based Universal Sound Selector"

43 papers shown.
Unleashing the Power of Natural Audio Featuring Multiple Sound Sources
Xize Cheng, Slytherin Wang, Zehan Wang, Rongjie Huang, Tao Jin, Zhou Zhao. 24 Apr 2025.

End-to-End Target Speaker Speech Recognition Using Context-Aware Attention Mechanisms for Challenging Enrollment Scenario
Mohsen Ghane, Mohammad Sadegh Safari. 28 Jan 2025.

30+ Years of Source Separation Research: Achievements and Future Challenges
S. Araki, N. Ito, Reinhold Haeb-Umbach, G. Wichern, Zhong-Qiu Wang, Yuki Mitsufuji. 21 Jan 2025. [AI4TS]

FlowSep: Language-Queried Sound Separation with Rectified Flow Matching
Yi Yuan, Xubo Liu, Haohe Liu, Mark D. Plumbley, Wenwu Wang. 10 Jan 2025.

Task-Aware Unified Source Separation
Kohei Saijo, Janek Ebbers, François Germain, G. Wichern, Jonathan Le Roux. 31 Oct 2024.

OmniSep: Unified Omni-Modality Sound Separation with Query-Mixup
Xize Cheng, Siqi Zheng, Zehan Wang, Minghui Fang, Ziang Zhang, ..., Z. Ma, Shengpeng Ji, Jialong Zuo, Tao Jin, Zhou Zhao. 28 Oct 2024.

Leveraging Audio-Only Data for Text-Queried Target Sound Extraction
Kohei Saijo, Janek Ebbers, François Germain, Sameer Khurana, G. Wichern, Jonathan Le Roux. 20 Sep 2024.

SoundBeam meets M2D: Target Sound Extraction with Audio Foundation Model
Carlos Hernandez-Olivan, Marc Delcroix, Tsubasa Ochiai, Daisuke Niizumi, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki. 19 Sep 2024.

Multichannel-to-Multichannel Target Sound Extraction Using Direction and Timestamp Clues
Dayun Choi, Jung-Woo Choi. 19 Sep 2024.

Language-Queried Target Sound Extraction Without Parallel Training Data
Hao Ma, Zhiyuan Peng, Xu Li, Yukai Li, Mingjie Shao, Qiuqiang Kong, Ju Liu. 14 Sep 2024. [VLM]

Interaural time difference loss for binaural target sound extraction
Carlos Hernandez-Olivan, Marc Delcroix, Tsubasa Ochiai, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki. 01 Aug 2024.

CATSE: A Context-Aware Framework for Causal Target Sound Extraction
Shrishail Baligar, M. Kegler, Bryce Irvin, Marko Stamenovic, Shawn Newsam. 21 Mar 2024.

Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables
Bandhav Veluri, Malek Itani, Justin Chan, Takuya Yoshioka, Shyamnath Gollakota. 01 Nov 2023.

Typing to Listen at the Cocktail Party: Text-Guided Target Speaker Extraction
Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan. 11 Oct 2023.

DPM-TSE: A Diffusion Probabilistic Model for Target Sound Extraction
Jiarui Hai, Helin Wang, Dongchao Yang, Karan Thakkar, Najim Dehak, Mounya Elhilali. 06 Oct 2023. [DiffM]

Separate Anything You Describe
Xubo Liu, Qiuqiang Kong, Yan Zhao, Haohe Liu, Yi Yuan, Yuzhuo Liu, Rui Xia, Yuxuan Wang, Mark D. Plumbley, Wenwu Wang. 09 Aug 2023. [VLM]

Complete and separate: Conditional separation with missing target source attribute completion
Dimitrios Bralios, Efthymios Tzinis, Paris Smaragdis. 27 Jul 2023.

Audio-Visual Speech Enhancement With Selective Off-Screen Speech Extraction
Tomoya Yoshinaga, Keitaro Tanaka, Shigeo Morishima. 10 Jun 2023.

CAPTDURE: Captioned Sound Dataset of Single Sources
Yuki Okamoto, Kanta Shimonishi, Keisuke Imoto, Kota Dohi, Shota Horiguchi, Y. Kawaguchi. 28 May 2023.

Neural Target Speech Extraction: An Overview
Kateřina Žmolíková, Marc Delcroix, Tsubasa Ochiai, K. Kinoshita, Jan "Honza" Černocký, Dong Yu. 31 Jan 2023.

Tackling the Cocktail Fork Problem for Separation and Transcription of Real-World Soundtracks
Darius Petermann, G. Wichern, Aswin Shanmugam Subramanian, Zhong-Qiu Wang, Jonathan Le Roux. 14 Dec 2022.

CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick. 14 Dec 2022. [VLM, CLIP]

Optimal Condition Training for Target Source Separation
Efthymios Tzinis, G. Wichern, Paris Smaragdis, Jonathan Le Roux. 11 Nov 2022.

Real-Time Target Sound Extraction
Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya Yoshioka, Shyamnath Gollakota. 04 Nov 2022.

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation
Efthymios Tzinis, Scott Wisdom, Tal Remez, J. Hershey. 20 Jul 2022.

Sound Event Triage: Detecting Sound Events Considering Priority of Classes
Noriyuki Tonami, Keisuke Imoto. 13 Apr 2022.

Text-Driven Separation of Arbitrary Sounds
Kevin Kilgour, Beat Gfeller, Qingqing Huang, A. Jansen, Scott Wisdom, Marco Tagliasacchi. 12 Apr 2022.

SoundBeam: Target sound extraction conditioned on sound-class labels and enrollment clues for increased performance and continuous learning
Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, K. Kinoshita, Yasunori Ohishi, S. Araki. 08 Apr 2022. [VLM]

Heterogeneous Target Speech Separation
Hyunjae Cho, Wonbin Jung, Junhyeok Lee, Paris Smaragdis, Sanghyun Woo. 07 Apr 2022.

RaDur: A Reference-aware and Duration-robust Network for Target Sound Detection
Dongchao Yang, Helin Wang, Zhongjie Ye, Yuexian Zou, Wenwu Wang. 05 Apr 2022.

Improving Target Sound Extraction with Timestamp Information
Helin Wang, Dongchao Yang, Chao Weng, Jianwei Yu, Yuexian Zou. 02 Apr 2022.

Separate What You Describe: Language-Queried Audio Source Separation
Xubo Liu, Haohe Liu, Qiuqiang Kong, Xinhao Mei, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang. 28 Mar 2022.

Locate This, Not That: Class-Conditioned Sound Event DOA Estimation
Olga Slizovskaia, G. Wichern, Zhong-Qiu Wang, Jonathan Le Roux. 08 Mar 2022.

Active Audio-Visual Separation of Dynamic Sound Sources
Sagnik Majumder, Kristen Grauman. 02 Feb 2022.

Detect what you want: Target Sound Detection
Dongchao Yang, Helin Wang, Yuexian Zou, Fan Cui, Chao Weng. 19 Dec 2021.

Environmental Sound Extraction Using Onomatopoeic Words
Yuki Okamoto, Shota Horiguchi, Masaaki Yamamoto, Keisuke Imoto, Y. Kawaguchi. 01 Dec 2021.

The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks
Darius Petermann, G. Wichern, Zhong-Qiu Wang, Jonathan Le Roux. 19 Oct 2021.

Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention
Efthymios Tzinis, Scott Wisdom, Tal Remez, J. Hershey. 17 Jun 2021. [VLM]

Few-shot learning of new sound classes for target sound extraction
Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, K. Kinoshita, S. Araki. 14 Jun 2021. [VLM]

Move2Hear: Active Audio-Visual Source Separation
Sagnik Majumder, Ziad Al-Halah, Kristen Grauman. 15 May 2021.

Speech enhancement with weakly labelled data from AudioSet
Qiuqiang Kong, Haohe Liu, Xingjian Du, Li Chen, Rui Xia, Yuxuan Wang. 19 Feb 2021.

Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds
Efthymios Tzinis, Scott Wisdom, A. Jansen, Shawn Hershey, Tal Remez, D. Ellis, J. Hershey. 02 Nov 2020.

Source separation with weakly labelled data: An approach to computational auditory scene analysis
Qiuqiang Kong, Yuxuan Wang, Xuchen Song, Yin Cao, Wenwu Wang, Mark D. Plumbley. 06 Feb 2020.