• Title/Summary/Keyword: Speech analysis

Search results: 1,568

An acoustical analysis of synchronous English speech using automatic intonation contour extraction (영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석)

  • Yi, So Pae
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.97-105
    • /
    • 2015
  • This research focuses on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two speaking modes (solo vs. synchronous) by 28 native speakers of English (12 women and 16 men). Synchronous speech is found to be slower than solo speech, and women are found to speak more slowly than men. The effect of speaking mode on speech rate is larger than that of gender, and there is no interaction between the two factors. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope) and Pd (pitch point distance) than solo speech, again with no interaction between speaking mode and gender. Analysis of sentence-level features reveals that synchronous speech has smaller Sr (sentence-level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch) and MinNr (normalized minimum pitch) but greater Min (minimum pitch) and Sd (sentence duration) than solo speech. The higher the Mid (median pitch), MaxNr and MinNr in the solo mode, the more they are reduced in the synchronous mode. Max, Min and Mid show greater speaker discriminability than the other features.
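The pitch-point measures above rest on extracting an intonation contour frame by frame. Below is a minimal sketch of autocorrelation-based F0 estimation, a common approach; the abstract does not name the paper's own extraction tool, so treat the parameters (frame size, pitch range) as illustrative assumptions.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=500.0):
    """Estimate F0 of one frame by picking the autocorrelation peak
    inside a plausible pitch-period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(sr / fmax)                      # shortest period considered
    lag_hi = min(int(sr / fmin), len(ac) - 1)    # longest period considered
    lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)               # synthetic 200 Hz "voice"
f0 = estimate_f0(tone[:1024], sr)                # should land near 200 Hz
```

Running this estimator over successive frames of an utterance yields the kind of contour from which pitch-point features (Pt, Pr, Ps, Pd) can then be measured.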

Matlab Implementation of Real-time Speech Analysis Tool (실시간 음성분석도구의 MatLab 구현)

  • Bak Il-suh;Kim Dae-hyun;Jo Cheol-woo
    • MALSORI
    • /
    • no.44
    • /
    • pp.93-104
    • /
    • 2002
  • Many speech analysis tools are available; among them, real-time tools are especially useful for interactive experiments. A real-time speech analysis tool was implemented in Matlab, a widely used general-purpose signal processing environment. In general, Matlab code runs slower than code written in conventional compiled languages, and in the past real-time analysis covering both signal input and result output was not possible. However, thanks to the improved computing power of PCs and the real-time I/O toolboxes now included with Matlab, real-time analysis is possible to some extent in Matlab alone. In this work we implemented a real-time speech analysis tool in Matlab that computes pitch and spectral information in real time. The results show that such real-time applications can be implemented easily in Matlab.
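Real-time operation reduces to analyzing each incoming audio buffer as it arrives. The sketch below shows such a frame loop in Python rather than the paper's Matlab; the frame and hop sizes are illustrative choices, and a real tool would read each frame from an audio device instead of an array.

```python
import numpy as np

def analyze_stream(signal, sr, frame_len=512, hop=256):
    """Process audio frame by frame, as a real-time tool would per buffer:
    window, magnitude spectrum, dominant-frequency readout."""
    window = np.hamming(frame_len)
    peaks = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))            # magnitude spectrum
        peaks.append(np.argmax(spectrum) * sr / frame_len)  # bin index -> Hz
    return peaks

sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
peaks = analyze_stream(sig, sr)   # dominant frequency per frame, near 440 Hz
```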


Acoustic Analysis of Speech Disorder Associated with Motor Aphasia - A Case Report -

  • Ko, Myung-Hwan;Kim, Hyun-Ki;Kim, Yun-Hee
    • Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.97-107
    • /
    • 2000
  • Motor aphasia is a disorder frequently caused by an insult to the left middle cerebral artery, usually involving a large lesion of Broca's area and the adjacent motor and premotor areas. A patient with motor aphasia therefore commonly shows articulatory disturbances due to failure of the motor programming of speech sounds. Objective assessment and treatment of phonologic programming is an important aspect of speech therapy for aphasic patients. We analyzed the speech disorder accompanying motor aphasia in a 45-year-old man using a computerized sound spectrograph, Visi-Pitch®, and the Multi-Dimensional Voice Program®. We conclude that a computerized speech analysis system is a useful tool for visualizing and quantitatively analyzing the severity and progression of dysarthria and the effect of speech therapy.


Speech Recognition in Noise Environment by Independent Component Analysis and Spectral Enhancement (독립 성분 분석과 스펙트럼 향상에 의한 잡음 환경에서의 음성인식)

  • Choi Seung-Ho
    • MALSORI
    • /
    • no.48
    • /
    • pp.81-91
    • /
    • 2003
  • In this paper, we propose a speech recognition method based on independent component analysis (ICA) and spectral enhancement techniques. While ICA tries to separate the speech signal from noisy speech using multiple channels, some noise remains because of the algorithm's limitations. Spectral enhancement techniques can compensate for this shortfall in ICA's separation ability. Speech recognition experiments in both instantaneous and convolutive mixing environments show that the proposed approach yields much higher recognition accuracy than conventional methods.
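ICA's residual noise is what the spectral-enhancement stage targets. The abstract does not specify which enhancement was used; the sketch below shows plain magnitude spectral subtraction, one standard choice. The frame length, the leading-frames noise estimate, and the over-subtraction factor are all assumptions for illustration.

```python
import numpy as np

def spectral_subtract(noisy, frame_len=256, noise_frames=4, alpha=1.0):
    """Estimate the noise magnitude spectrum from the leading frames and
    subtract it from every frame, flooring negative magnitudes at zero."""
    n = len(noisy) // frame_len
    frames = noisy[:n * frame_len].reshape(n, frame_len)
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = mag[:noise_frames].mean(axis=0)  # assumes leading frames are noise-only
    clean_mag = np.maximum(mag - alpha * noise_mag, 0.0)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1).ravel()

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
cleaned = spectral_subtract(noise)   # a pure-noise input should shrink markedly
```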


Integrated Visual and Speech Parameters in Korean Numeral Speech Recognition

  • Lee, Sang-won;Park, In-Jung;Lee, Chun-Woo;Kim, Hyung-Bae
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.685-688
    • /
    • 2000
  • In this paper, we used image information to enhance Korean numeral speech recognition. First, a noisy environment was simulated by adding Gaussian noise at 10 dB steps to the original Korean numeral speech, and the resulting speech was analyzed for recognition. The microphone speech was pre-emphasized with a coefficient of 0.95, then processed with a Hamming window, autocorrelation and LPC analysis. Second, the image captured by a camera was converted to gray level, autocorrelated, and analyzed with the same LPC algorithm used for the speech. Finally, Korean numeral recognition with image information outperformed speech-only recognition, especially for '3', '5' and '9'. Because the same LPC algorithm and a simple image processing step were used, no additional computation such as filtering was needed, keeping the overall recognition algorithm simple.
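The speech front end described here (0.95 pre-emphasis, Hamming window, autocorrelation, LPC) can be sketched directly; the Levinson-Durbin recursion below is the textbook way to solve the LPC normal equations. The order of 12 is an assumed value, not taken from the paper.

```python
import numpy as np

def lpc_coeffs(frame, order=12):
    """LPC analysis front end: 0.95 pre-emphasis, Hamming window,
    autocorrelation, then the Levinson-Durbin recursion."""
    x = np.append(frame[0], frame[1:] - 0.95 * frame[:-1])   # pre-emphasis
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err    # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]           # order-i coefficient update
        err *= 1.0 - k * k                                   # prediction error shrinks
    return a

rng = np.random.default_rng(1)
frame = np.sin(2 * np.pi * 150 * np.arange(400) / 8000) + 0.1 * rng.standard_normal(400)
a = lpc_coeffs(frame)   # a[0] == 1, remaining entries are predictor coefficients
```

Because the autocorrelation method is used, the resulting predictor polynomial is minimum-phase, i.e. all of its roots lie inside the unit circle.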


A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo;Chung, Kyeong-Chae;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.187-200
    • /
    • 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for handicapped persons, etc. However, the sound quality of speech synthesizers has not yet reached a level satisfactory to users. Existing synthesizers generate prosody only from interval information such as spaces and commas, or from a few punctuation marks such as question and exclamation marks, so they cannot easily produce natural human rhythms even when built on a large speech database. One remedy is to select prosody after language processing that draws on higher-level information. This paper proposes a method for generating tags that control prosody by analyzing sentence meaning together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes sentence meaning in light of the speech situation: the preceding sentence, the conversational setting, the relationships among the speakers, and so on. In this study, we generate a Semantic Speech Control Tag (SSCT) from the result of the SFG meaning analysis and from voice waveform analysis.


A Correlation Study among Acoustic Parameters of MDVP, Praat, and Dr. Speech (MDVP와 Praat, Dr. Speech간의 음향학적 측정치에 관한 상관연구)

  • Yoo, Jae-Yeon;Jeong, Ok-Ran;Jang, Tae-Yeoub;Ko, Do-Heung
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.29-36
    • /
    • 2003
  • The purpose of this study was to conduct a correlational analysis of F0, jitter, shimmer, NHR (HNR), and NNE as estimated by three speech analysis software packages: MDVP, Praat and Dr. Speech. Thirty females and 15 males with normal voice participated in the study. Sound Forge 6.0 was used to record their voices, and MDVP, Praat and Dr. Speech were used to measure the acoustic parameters. Pearson correlation coefficients were computed. The results were as follows. First, there were strong cross-package correlations for F0 and shimmer, but none for jitter. Second, shimmer showed a stronger correlation with HNR, NHR, and NNE than jitter did. Shimmer was therefore considered a more useful and sensitive parameter than jitter for identifying dysphonic voice.
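The cross-package comparison rests on the Pearson product-moment coefficient; a minimal implementation follows. The shimmer values shown are made up purely for illustration, not taken from the study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical shimmer (%) for 5 speakers, measured by two analysis packages
tool_a = [2.1, 3.4, 1.8, 4.0, 2.7]
tool_b = [2.3, 3.1, 1.9, 4.4, 2.5]
r = pearson_r(tool_a, tool_b)   # close to 1 when the packages agree
```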


Analysis of Mobile Application Trends for Speech and Language Therapy of Children with Disabilities in Korea (국내 장애 아동을 위한 언어치료용 모바일 어플리케이션 현황 분석)

  • Lee, Youngmee;Lee, Soobok;Sung, Minkyoung
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.153-163
    • /
    • 2015
  • This study investigated trends in mobile applications developed to promote the speech and language skills of children with disabilities, and analyzed the function and contents of these applications as speech and language therapy tools. Twenty of 71 applications were selected after applying exclusion criteria. These applications were classified into eight content-usage types, and their functions were analyzed with a revised mobile contents evaluation standard (ease of use, educational value, interest level, and interactivity). As a result, applications for augmentative and alternative communication were the most common type. Ease of use received the highest score in the overall evaluation, whereas interest level received the lowest. The results suggest a way to evaluate applications for speech-language therapy and should contribute to developing the contents and functions of mobile applications that aim to help children with disabilities improve their speech and language skills.

Characteristics of voice quality on clear versus casual speech in individuals with Parkinson's disease (명료발화와 보통발화에서 파킨슨병환자 음성의 켑스트럼 및 스펙트럼 분석)

  • Shin, Hee-Baek;Shim, Hee-Jeong;Jung, Hun;Ko, Do-Heung
    • Phonetics and Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.77-84
    • /
    • 2018
  • The purpose of this study is to examine the acoustic characteristics of Parkinsonian speech, with respect to different utterance conditions, by employing acoustic/auditory-perceptual analysis. The subjects of the study were 15 patients (M=7, F=8) with Parkinson's disease who were asked to read out sentences under different utterance conditions (clear/casual). The sentences read out by each subject were recorded, and the recorded speech was subjected to cepstrum and spectrum analysis using Analysis of Dysphonia in Speech and Voice (ADSV). Additionally, auditory-perceptual evaluation of the recorded speech was conducted with respect to breathiness and loudness. Results indicate that in the case of clear speech, there was a statistically significant increase in the cepstral peak prominence (CPP), and a decrease in the L/H ratio SD (ratio of low to high frequency spectral energy SD) and CPP F0 SD values. In the auditory-perceptual evaluation, a decrease in breathiness and an increase in loudness were noted. Furthermore, CPP was found to be highly correlated to breathiness and loudness. This provides objective evidence of the immediate usefulness of clear speech intervention in improving the voice quality of Parkinsonian speech.
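CPP, the study's key measure, is the height of the cepstral peak above a regression line fitted through the cepstrum. The sketch below shows the basic computation; ADSV's exact windowing and smoothing are not reproduced, and the pitch search range is an assumption.

```python
import numpy as np

def cpp(frame, sr, fmin=60.0, fmax=330.0):
    """Cepstral peak prominence in dB: cepstral peak in the voice pitch
    range minus a linear-regression baseline over that range."""
    n = len(frame)
    spec_db = 20 * np.log10(np.abs(np.fft.fft(frame * np.hamming(n))) + 1e-12)
    cep = np.fft.ifft(spec_db).real               # cepstrum of the dB spectrum
    lo, hi = int(sr / fmax), int(sr / fmin)       # quefrency search range (samples)
    peak = lo + int(np.argmax(cep[lo:hi]))
    slope, intercept = np.polyfit(np.arange(lo, hi), cep[lo:hi], 1)
    return cep[peak] - (slope * peak + intercept)

sr, n = 16000, 1024
t = np.arange(n) / sr
voiced = sum(np.sin(2 * np.pi * 200 * k * t) for k in range(1, 9))  # harmonic-rich
rng = np.random.default_rng(2)
noise = rng.standard_normal(n)
```

A strongly periodic (less breathy) frame produces a higher CPP than noise, which is why CPP rises under clear speech in the study above.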

Vowel Space Area and Speech Intelligibility of Children with Cochlear Implants (인공와우이식 아동의 모음공간면적과 말명료도)

  • Park, Hyemi;Huh, Myungjin
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.89-96
    • /
    • 2014
  • This study measured speech intelligibility in relation to the vowel space area and listener perception through acoustic analysis of children who had received cochlear implants, providing baseline data for intelligibility evaluation by analyzing the correlation between the two measures. The vowel space area was computed from the F1 and F2 values of children three years after cochlear implantation and compared with that of normally hearing children, while speech intelligibility was measured by interval scaling. A product-moment correlation analysis examined the relationship between the measures. The vowel space area of the implanted children differed significantly from that of the normal children, although their speech intelligibility was similar. The correlation analysis between vowel space area and speech intelligibility showed no significant correlation. These findings can inform objective vowel-space-area standards and expectations for intelligibility improvement after implantation; in addition, acoustic rating is needed to increase the accuracy of objective measurement in intelligibility evaluation.
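Vowel space area is conventionally the area of the polygon spanned by the corner vowels' (F1, F2) points, which the shoelace formula computes. The formant values below are illustrative placeholders, not data from the study.

```python
def vowel_space_area(points):
    """Polygon area via the shoelace formula; `points` lists (F1, F2)
    vertices in order around the vowel polygon."""
    area = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# illustrative corner vowels /i/, /a/, /u/ as (F1, F2) in Hz
triangle = [(300, 2300), (750, 1200), (350, 800)]
vsa = vowel_space_area(triangle)   # area in Hz^2
```

A smaller area indicates more centralized vowels, which is why the measure is compared between implanted and normally hearing children.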