ERANNs: Efficient Residual Audio Neural Networks for Audio Pattern Recognition

This paper presents a new convolutional neural network architecture for audio pattern recognition tasks. We introduce a new hyper-parameter that reduces the computational complexity of the models; with optimal values of this parameter, we can preserve or even improve model performance. We confirm this with experiments on three datasets: AudioSet, ESC-50, and RAVDESS. Our best model achieves an mAP of 0.450 on AudioSet, which is below the state-of-the-art result, but our model is 7.1x faster and has 9.7x fewer parameters. On ESC-50 and RAVDESS, we obtain state-of-the-art results with accuracies of 0.961 and 0.748, respectively. Our best model for ESC-50 is 1.7x faster and 2.3x smaller than the previous best model, and for RAVDESS it is 3.3x smaller than the state-of-the-art model. We call our models "ERANNs" (Efficient Residual Audio Neural Networks).