Please use this identifier to cite or link to this item: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4777
Title: Stuttered Speech Synthesis using Limited data
Authors: ABEYWICKRAMA, N.H.L
Issue Date: May-2024
Abstract: Stuttered speech poses a significant challenge for modern speech applications, primarily because little data is available for developing such systems. Recent deep learning efforts have aimed to synthesize stuttered speech to mitigate this data scarcity; however, building such systems itself depends on the availability of sufficient data, which remains a persistent challenge. Before the advent of deep learning, speech synthesis methods existed that could operate with very little data. This study proposes a simple, straightforward approach that uses these low-resource synthesis methods to automatically generate repetition stuttering. The study examines both sound and word repetition, using a dataset of only 50 stuttering samples, in contrast to the large datasets deep learning typically requires. Through the use of the concatenative speech synthesis method, the proposed solution synthesizes repetition stuttering according to its structure, which preserves the realistic nature of stuttering. The proposed method synthesizes repetition stuttering with high accuracy, achieving a mean MCD (mel-cepstral distortion) score of 8.9. Evaluation using objective measures of speech quality and intelligibility resulted in average scores of 1.37 and 0.24, respectively. These results indicate that the proposed method is effective and reliable, achieving outcomes comparable to deep learning methods while utilizing a far more manageable amount of available data.
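The core idea of the concatenative approach described above can be sketched as re-inserting copies of a speech unit (a sound or word segment) back into the utterance. The following is a minimal illustrative sketch only, not the thesis's implementation; the function name, segment boundaries, and sample values are invented for illustration, with the waveform stood in by a plain list of floats.

```python
# Hypothetical sketch of concatenative repetition synthesis: a unit
# (sound or word segment) is repeated, with short silences between
# repetitions, before the rest of the utterance continues.
# All names and values here are illustrative assumptions.

def synthesize_repetition(samples, unit_start, unit_end, repeats=2, pause=None):
    """Concatenate `repeats` copies of samples[unit_start:unit_end],
    separated by a silent pause, then append the remainder."""
    if pause is None:
        pause = [0.0] * 4              # short silence between repetitions
    unit = samples[unit_start:unit_end]
    out = list(samples[:unit_start])   # fluent speech before the unit
    for i in range(repeats):
        out.extend(unit)
        if i < repeats - 1:
            out.extend(pause)          # pause only between repetitions
    out.extend(samples[unit_end:])     # remainder of the original utterance
    return out

# Toy "waveform": the unit [0.5, 0.6] stands in for a word-initial sound.
fluent = [0.1, 0.5, 0.6, 0.2, 0.3]
stuttered = synthesize_repetition(fluent, 1, 3, repeats=3)
```

In a real system the units would come from segmented recordings of the 50-sample dataset, and the pause lengths and repetition counts would be chosen to match the observed structure of repetition stuttering.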
URI: https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4777
Appears in Collections:2024

Files in This Item:
File             Size     Format
2019 CS 003.pdf  2.46 MB  Adobe PDF


Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.