A Novel Deep Learning Framework for Efficient Multichannel Acoustic Feedback Control

This study presents a deep-learning framework for controlling multichannel acoustic feedback in audio devices. Traditional digital signal processing methods struggle to converge when dealing with highly correlated interference such as acoustic feedback. We introduce a Convolutional Recurrent Network that efficiently combines spatial and temporal processing, improving speech enhancement quality while reducing computational demands. Our approach employs three training strategies: In-a-Loop Training, Teacher Forcing, and a Hybrid strategy with a Multichannel Wiener Filter, to optimize performance in complex acoustic environments. The resulting framework is scalable, offers a robust solution for real-world applications, and marks a significant advance in Acoustic Feedback Control technology.
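The abstract does not specify the network layout; the sketch below is only a minimal PyTorch illustration of the general idea, assuming a convolutional front end that mixes the microphone channels and local time-frequency context (spatial processing), a GRU for temporal modelling, and a mask-prediction head. The class name CRNFeedbackSuppressor, all layer sizes, and the input shape are hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CRNFeedbackSuppressor(nn.Module):
    """Hypothetical convolutional recurrent network: conv layers mix the
    microphone channels (spatial processing), a GRU models temporal
    dynamics, and a linear head predicts a single-channel mask."""
    def __init__(self, n_mics=4, n_freq=257, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_mics, 32, kernel_size=(3, 3), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=(3, 3), padding=(1, 1)),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(input_size=16 * n_freq, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_freq)

    def forward(self, spec):
        # spec: (batch, mics, time, freq) magnitude spectrogram
        z = self.encoder(spec)                      # (batch, 16, time, freq)
        b, c, t, f = z.shape
        z = z.permute(0, 2, 1, 3).reshape(b, t, c * f)
        z, _ = self.rnn(z)                          # temporal modelling
        return torch.sigmoid(self.head(z))          # mask: (batch, time, freq)

# Toy usage: 4-mic input, 100 frames, 257 frequency bins
mask = CRNFeedbackSuppressor()(torch.randn(2, 4, 100, 257))
print(mask.shape)  # torch.Size([2, 100, 257])
```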
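Likewise, the training strategies are only named in the abstract. The sketch below is a conceptual illustration of the difference between In-a-Loop Training and Teacher Forcing, assuming a frame-by-frame closed loop in which the loudspeaker signal re-enters the microphone through a simulated feedback path; closed_loop_rollout, feedback_path, and the frame-level model interface are hypothetical, and the Hybrid variant with a Multichannel Wiener Filter is omitted.

```python
import torch

def closed_loop_rollout(model, source, feedback_path, teacher_forcing=False):
    """Hypothetical rollout: with teacher_forcing=False (in-a-loop training)
    the model's own output is re-injected through the feedback path; with
    teacher_forcing=True the clean source frame is fed back instead."""
    loudspeaker = torch.zeros_like(source[0])
    outputs = []
    for frame in source:                       # iterate over time frames
        # microphone = source + acoustic feedback of previous loudspeaker frame
        mic = frame + feedback_path(loudspeaker)
        enhanced = model(mic)                  # network suppresses the feedback
        outputs.append(enhanced)
        # choose what re-enters the loop on the next frame
        loudspeaker = frame if teacher_forcing else enhanced
    return torch.stack(outputs)

# Toy usage with placeholder model and feedback path
frames = torch.randn(100, 257)
out = closed_loop_rollout(lambda x: 0.5 * x, frames, lambda y: 0.1 * y)
print(out.shape)  # torch.Size([100, 257])
```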
@article{wu2025_2505.15914,
  title   = {A Novel Deep Learning Framework for Efficient Multichannel Acoustic Feedback Control},
  author  = {Yuan-Kuei Wu and Juan Azcarreta and Kashyap Patel and Buye Xu and Jung-Suk Lee and Sanha Lee and Ashutosh Pandey},
  journal = {arXiv preprint arXiv:2505.15914},
  year    = {2025}
}