Journal of Acoustical Engineering Society of Iran, 2021, Volume 8, Issue 2: 51-59




Asgari M, Akbari N. Improving the average precision of Persian vowel classification from speech signal by using convolutional neural network (Research Article). Journal of Acoustical Engineering Society of Iran 2021; 8(2): 51-59
URL: http://joasi.ir/article-1-173-en.html
Abstract:
One approach to speech recognition is to model speech in terms of a set of phonetic units. Because the frequency and temporal characteristics of vowels are more stable than those of other phonemes, recognizing vowels is important for distinguishing speech. The aim of this research is to present a model based on modern methods, such as deep neural networks, to improve the accuracy of vowel recognition and broaden its applications. Thirty speakers (15 female and 15 male) read all combinations of consonants with the 6 Persian vowels. After preprocessing, the speech data is segmented into frames containing only the vowels, and a spectrogram is extracted from each frame. These spectrograms are given as input to a neural network with two hidden layers. Speech from 25 speakers was used for training and speech from the remaining 5 speakers for testing. The average accuracy over the 6 Persian vowels for the proposed model was 93.17% (an overall average vowel-detection error of 6.83%). In previous work the average vowel-detection error ranged from 9.7% to 19.6%, so the proposed model improves on those results by 2.87 to 12.77 percentage points.
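As a rough illustration of the pipeline the abstract describes (spectrogram extraction from vowel-only frames, followed by a convolutional network with two hidden layers), a minimal sketch in Python might look as follows. The sampling rate, frame length, FFT settings, and layer sizes are assumptions chosen for illustration; the abstract does not specify the authors' exact configuration.

```python
# Hypothetical sketch of the vowel-classification pipeline described above.
# Frame length, FFT size, and network dimensions are illustrative assumptions;
# the paper only states a CNN with two hidden layers on vowel spectrograms.
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

NUM_VOWELS = 6  # the six Persian vowels

def vowel_spectrogram(frame: np.ndarray, fs: int = 16000) -> torch.Tensor:
    """Log-magnitude spectrogram of one speech frame containing a vowel."""
    _, _, sxx = spectrogram(frame, fs=fs, nperseg=256, noverlap=128)
    log_sxx = np.log(sxx + 1e-10)  # log compression for dynamic range
    return torch.from_numpy(log_sxx).float().unsqueeze(0)  # (1, freq, time)

class VowelCNN(nn.Module):
    """Small CNN with two hidden (convolutional) layers, as in the abstract."""
    def __init__(self, freq_bins: int = 129, time_bins: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (freq_bins // 4) * (time_bins // 4)  # size after two 2x pools
        self.classifier = nn.Linear(flat, NUM_VOWELS)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: classify one 0.2 s vowel frame (random audio here) at 16 kHz.
frame = np.random.randn(3200)
spec = vowel_spectrogram(frame)         # (1, 129, 24) for these settings
logits = VowelCNN()(spec.unsqueeze(0))  # batch of 1 -> (1, 6)
predicted_vowel = logits.argmax(dim=1)
```

In this sketch the "two hidden layers" are taken to be convolutional layers; the abstract does not say whether they are convolutional or fully connected.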
Full-Text [PDF 85 kb]
Type of Study: Applied | Subject: Signal Processing
Received: 2020/02/16 | Accepted: 2021/03/02 | Published: 2021/03/10



Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.