Speech dysfluency detection is crucial for clinical diagnosis and language assessment, but existing methods are limited by the scarcity of high-quality annotated data. Although recent advances in TTS models have enabled synthetic dysfluency generation, existing synthetic datasets suffer from unnatural prosody and limited contextual diversity. To address these limitations, we propose LLM-Dys -- the most comprehensive dysfluent speech corpus with LLM-enhanced dysfluency simulation. This dataset captures 11 dysfluency categories spanning both word and phoneme levels. Building upon this resource, we further improve an end-to-end dysfluency detection framework. Experimental validation demonstrates state-of-the-art performance. All data, models, and code are open-sourced at this https URL.
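To illustrate the general idea behind LLM-enhanced dysfluency simulation, the following is a minimal Python sketch of how an LLM could rewrite a fluent transcript to contain a labeled word-level dysfluency before TTS synthesis. This is an assumption-based illustration, not the authors' actual pipeline: the prompt wording, category list, marker tags, and model choice are all hypothetical.

# Hypothetical sketch: inject one word-level dysfluency into a fluent transcript
# using an LLM, keeping a <dys>...</dys> marker so the ground-truth span survives
# for later alignment with the synthesized audio.
from openai import OpenAI

# Illustrative word-level categories; the actual LLM-Dys taxonomy covers 11
# categories across word and phoneme levels.
WORD_LEVEL_CATEGORIES = ["repetition", "insertion", "deletion", "substitution", "prolongation"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def inject_dysfluency(transcript: str, category: str) -> str:
    """Ask the LLM to rewrite a fluent sentence with exactly one dysfluency,
    wrapping the edited span in <dys>...</dys> tags."""
    assert category in WORD_LEVEL_CATEGORIES
    prompt = (
        f"Rewrite the sentence below so that it contains exactly one word-level "
        f"{category} dysfluency. Wrap only the dysfluent span in <dys>...</dys> "
        f"and keep the rest of the sentence unchanged.\n\nSentence: {transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(inject_dysfluency("The weather is lovely this afternoon.", "repetition"))
    # e.g. "The weather is <dys>lovely, lovely,</dys> lovely this afternoon."

The annotated text produced this way would then be fed to a TTS system to generate the dysfluent audio, with the marked span providing time-aligned labels once the synthesized speech is force-aligned.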
@article{zhang2025_2505.22029,
  title={Analysis and Evaluation of Synthetic Data Generation in Speech Dysfluency Detection},
  author={Jinming Zhang and Xuanru Zhou and Jiachen Lian and Shuhe Li and William Li and Zoe Ezzes and Rian Bogley and Lisa Wauters and Zachary Miller and Jet Vonk and Brittany Morin and Maria Gorno-Tempini and Gopala Anumanchipalli},
  journal={arXiv preprint arXiv:2505.22029},
  year={2025}
}