Whispering in Amharic: Fine-tuning Whisper for Low-resource Language

Abstract

This work explores fine-tuning OpenAI's Whisper automatic speech recognition (ASR) model for Amharic, a low-resource language, to improve transcription accuracy. While the foundational Whisper model struggles with Amharic due to its limited representation in the training data, we fine-tune it on datasets such as Mozilla Common Voice, FLEURS, and the BDU-speech dataset. The best-performing model, Whisper-small-am, improves significantly when fine-tuned on a mix of existing FLEURS data and new, previously unseen Amharic datasets. Training solely on the new data leads to poor performance, but combining it with FLEURS data reinforces the model and enables better specialization in Amharic. We also demonstrate that normalizing Amharic homophones substantially improves Word Error Rate (WER) and Bilingual Evaluation Understudy (BLEU) scores. This study underscores the importance of fine-tuning strategy and dataset composition for improving ASR in low-resource languages, and provides insights for future Amharic speech recognition research.
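
As a concrete illustration of the fine-tuning setup the abstract describes, the following Python sketch adapts Whisper-small to Amharic with Hugging Face Transformers and Datasets. It is a minimal sketch, not the authors' pipeline: the hyperparameters, preprocessing, and output name are assumptions, and only the FLEURS portion of the data mix is shown.

from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="am", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# FLEURS Amharic ("am_et" is the FLEURS config for Amharic); the abstract
# reports that keeping FLEURS in the mix alongside the new data is what
# stabilizes training.
train = load_dataset("google/fleurs", "am_et", split="train")
train = train.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-Mel input features from the waveform, token ids from the transcript.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

train = train.map(prepare, remove_columns=train.column_names)

class Collator:
    # Pad audio features and labels separately; mask label padding with -100
    # so it is ignored by the cross-entropy loss.
    def __call__(self, features):
        batch = processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt",
        )
        labels = processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100
        )
        return batch

# Illustrative hyperparameters; fp16 assumes a GPU is available.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-am",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    fp16=True,
)
Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=Collator(),
).train()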

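The homophone normalization step can be sketched as below: Ge'ez letters that encode the same Amharic sound are collapsed onto one representative form before scoring, so spelling variants are not counted as recognition errors. The specific canonical choices are an assumption (a complete table would also cover all seven vowel orders of each series, not just the base characters), and jiwer stands in for whatever WER implementation the authors used.

from jiwer import wer

# Illustrative base-character mapping; each homophone series collapses to one
# representative form.
HOMOPHONES = str.maketrans({
    "ሐ": "ሀ", "ኀ": "ሀ",  # the "ha" series collapse to ሀ
    "ሠ": "ሰ",            # "se" -> ሰ
    "ዐ": "አ",            # glottal "a" -> አ
    "ፀ": "ጸ",            # "tse" -> ጸ
})

def normalize(text: str) -> str:
    return text.translate(HOMOPHONES)

reference = "ሰላም ሀገር"   # hypothetical reference/hypothesis pair
hypothesis = "ሰላም ሐገር"  # same words, alternative homophone spelling

print(wer(reference, hypothesis))                        # 0.5: spelling variant counted as an error
print(wer(normalize(reference), normalize(hypothesis)))  # 0.0 after normalization
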
@article{gete2025_2503.18485,
  title={Whispering in Amharic: Fine-tuning Whisper for Low-resource Language},
  author={Dawit Ketema Gete and Bedru Yimam Ahmed and Tadesse Destaw Belay and Yohannes Ayana Ejigu and Sukairaj Hafiz Imam and Alemu Belay Tessema and Mohammed Oumer Adem and Tadesse Amare Belay and Robert Geislinger and Umma Aliyu Musa and Martin Semmann and Shamsuddeen Hassan Muhammad and Henning Schreiber and Seid Muhie Yimam},
  journal={arXiv preprint arXiv:2503.18485},
  year={2025}
}