Connecting Voices: LoReSpeech as a Low-Resource Speech Parallel Corpus

Aligned audio corpora are fundamental to NLP technologies such as ASR and speech translation, yet they remain scarce for underrepresented languages, hindering their technological integration. This paper introduces a methodology for constructing LoReSpeech, a low-resource speech-to-speech translation corpus. Our approach begins with LoReASR, a sub-corpus of short audio clips aligned with their transcriptions, created through a collaborative platform. Building on LoReASR, long-form audio recordings, such as biblical texts, are aligned using tools such as the Montreal Forced Aligner (MFA). LoReSpeech delivers both intra- and inter-language alignments, enabling advances in multilingual ASR systems, direct speech-to-speech translation models, and linguistic preservation efforts, while fostering digital inclusivity. This work is conducted within the Tutlayt AI project (this https URL).
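
As a rough illustration of how the inter-language alignments described above could be assembled, the Python sketch below pairs forced-aligned segments from two languages by a shared segment identifier (for biblical texts, a verse reference). The file names, TSV layout, and the MFA invocation mentioned in the comments are assumptions made for illustration only; the paper does not specify this interface.

# Hypothetical sketch of the inter-language pairing step: match segments that
# two languages share (e.g. the same Bible verse reference) into
# speech-to-speech translation pairs. File names and the TSV layout are
# illustrative assumptions, not LoReSpeech's actual schema or pipeline.
#
# Forced alignment of a long-form recording could first be run with, e.g.:
#   mfa align corpus_dir/ dictionary.txt acoustic_model output_dir/
# (assumed MFA-style invocation)

import csv


def load_segments(tsv_path):
    """Read aligned segments from a TSV: segment_id <TAB> audio_path <TAB> text."""
    segments = {}
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for seg_id, audio_path, text in csv.reader(f, delimiter="\t"):
            segments[seg_id] = {"audio": audio_path, "text": text}
    return segments


def pair_languages(src_tsv, tgt_tsv):
    """Pair segments across two languages by their shared segment IDs."""
    src, tgt = load_segments(src_tsv), load_segments(tgt_tsv)
    shared_ids = sorted(src.keys() & tgt.keys())
    return [{"id": i, "src": src[i], "tgt": tgt[i]} for i in shared_ids]


if __name__ == "__main__":
    pairs = pair_languages("lang_a_segments.tsv", "lang_b_segments.tsv")
    print(f"built {len(pairs)} inter-language speech pairs")

In this sketch, intra-language alignment (audio to text) is assumed to have been produced beforehand by the forced aligner, and inter-language alignment reduces to joining the two segment tables on their shared keys.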
@article{ouzerrout2025_2502.18215,
  title   = {Connecting Voices: LoReSpeech as a Low-Resource Speech Parallel Corpus},
  author  = {Samy Ouzerrout},
  journal = {arXiv preprint arXiv:2502.18215},
  year    = {2025}
}