
Unsupervised Learning of Audio Segment Representations using Sequence-to-sequence Recurrent Neural Networks

Abstract

Representing audio segments expressed as variable-length acoustic feature sequences by fixed-length feature vectors is needed in many speech applications, including speaker identification, audio emotion classification, and spoken term detection (STD). In this paper, we apply and extend the sequence-to-sequence learning framework to learn representations for audio segments without any supervision. The model, called a Sequence-to-sequence Autoencoder (SA), consists of two RNNs equipped with Long Short-Term Memory (LSTM) units: the first RNN acts as an encoder that maps the input sequence into a vector representation of fixed dimensionality, and the second RNN acts as a decoder that maps the representation back to the input sequence. The two RNNs are jointly trained by minimizing the reconstruction error. We further propose the Denoising Sequence-to-sequence Autoencoder (DSA), which improves the learned representations. The vector representations learned by SA and DSA are shown to be very helpful for query-by-example STD. Experimental results show that the proposed models achieve better retrieval performance than heuristically designed audio segment representations and the classical Dynamic Time Warping (DTW) approach.
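
As a rough illustration (not the authors' code), the following PyTorch sketch shows the SA idea described above: an LSTM encoder compresses a variable-length feature sequence into a fixed-length vector, and an LSTM decoder reconstructs the sequence from that vector, with both networks trained jointly on reconstruction error. The feature dimension, hidden size, and training settings are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

class SeqToSeqAutoencoder(nn.Module):
    def __init__(self, feat_dim=39, hidden_dim=128):
        super().__init__()
        # Encoder RNN: maps the input sequence to a fixed-length hidden state.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder RNN: reconstructs the sequence from that hidden state.
        self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):
        # x: (batch, time, feat_dim) acoustic features, e.g. MFCC frames.
        _, (h, c) = self.encoder(x)          # h holds the fixed-length segment representation
        # The decoder is conditioned only on the encoder's final state and fed zeros,
        # so all information must pass through the fixed-length vector.
        dec_in = torch.zeros_like(x)
        out, _ = self.decoder(dec_in, (h, c))
        return self.output(out), h.squeeze(0)

model = SeqToSeqAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 39)                   # toy batch: 8 segments, 50 frames each
# For a DSA-style variant, one would corrupt the input (e.g., x + noise)
# while still reconstructing the clean x.
recon, embedding = model(x)
loss = nn.functional.mse_loss(recon, x)      # reconstruction error, jointly trains both RNNs
loss.backward()
optimizer.step()

After training, the hypothetical "embedding" vector would serve as the fixed-length representation of each audio segment, e.g. for query-by-example retrieval by comparing query and document embeddings directly.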
