• Title/Abstract/Keyword: Speech analysis

Search results: 1,568 items

인공와우이식 아동 말용인도의 예측 변인 (Variables for Predicting Speech Acceptability of Children with Cochlear Implants)

  • 윤미선
    • 말소리와 음성과학, Vol. 6, No. 4, pp. 171-179, 2014
  • Purposes: Speech acceptability refers to listeners' subjective judgement of the naturalness and normality of speech. The purpose of this study was to determine the variables that predict the speech acceptability of children with cochlear implants. Methods: Twenty-seven children with CI participated. They had profound pre-lingual hearing loss without any additional disabilities. The mean chronological age was 8;9, and the mean age at implantation was 2;11. Speech samples of reading and spontaneous speech were recorded separately. Twenty college students who were not familiar with the speech of deaf children evaluated speech acceptability using a visual analog scale. One segmental feature (articulation) and six suprasegmental features (pitch, loudness, quality, resonance, intonation, and speaking rate) were perceptually evaluated by three SLPs. Correlation and multiple regression analyses were performed to identify the predicting variables. Results: The mean speech acceptability scores for reading and spontaneous speech were 73.47 and 71.96, respectively. Speech acceptability of reading was predicted by the severity of intonation and articulation; speech acceptability of spontaneous speech was predicted by the severity of intonation and loudness. Discussion and conclusion: Severity of intonation was the most effective variable for predicting speech acceptability in both reading and spontaneous speech. Further study is necessary to generalize the results and to apply them to intervention in clinical settings.
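The regression analysis described above pairs listener acceptability scores with perceptual severity ratings for one segmental and six suprasegmental features. A minimal sketch of that kind of analysis is shown below; the file name, column names, and one-row-per-child layout are illustrative assumptions, not the study's materials.

```python
# Sketch: multiple regression of acceptability on perceptual severity ratings.
# "ratings.csv", the column names, and the data layout are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ratings.csv")
predictors = ["articulation", "pitch", "loudness", "quality",
              "resonance", "intonation", "speaking_rate"]
X = sm.add_constant(df[predictors])          # severity ratings as predictors
y = df["acceptability_reading"]              # listener acceptability (visual analog scale)

model = sm.OLS(y, X).fit()
print(model.summary())                       # coefficients show which severities predict acceptability
```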

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo; Kim, Paul; Hong, Kwang-Seok
    • 대한음성학회 학술대회논문집 (October 1996 conference), pp. 435-447, 1996
  • In this paper, we propose a method that extracts speech features based on a human hearing model using signal processing techniques. The proposed method consists of the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively, and, with VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has the potential to serve as a simple and efficient feature for recognition tasks.
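The processing chain described in this abstract (normalization, wavelet multi-resolution analysis and re-synthesis, differentiation, full-wave rectification, integration) can be sketched with PyWavelets as below. The wavelet choice, decomposition depth, and block length are assumptions for illustration, not the paper's settings.

```python
# Sketch of the hearing-model-style pipeline described above; 'db4', level=4,
# and the 400-sample block are illustrative choices, not the paper's parameters.
import numpy as np
import pywt

def wavelet_feature(block, wavelet="db4", level=4):
    block = block / (np.max(np.abs(block)) + 1e-12)            # normalize by maximum value
    coeffs = pywt.wavedec(block, wavelet, level=level)          # multi-resolution analysis
    features = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)                      # re-synthesize one band
        band = np.diff(band, prepend=band[0])                   # differentiation
        band = np.abs(band)                                     # full-wave rectification
        features.append(band.sum())                             # integration over the block
    return np.array(features)

print(wavelet_feature(np.random.randn(400)))                    # one 25 ms block at 16 kHz
```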

자율이동로봇의 명령 교시를 위한 HMM 기반 음성인식시스템의 구현 (Implementation of Hidden Markov Model based Speech Recognition System for Teaching Autonomous Mobile Robot)

  • 조현수; 박민규; 이민철
    • 제어로봇시스템학회 학술대회논문집 (15th conference, 2000), p. 281, 2000
  • This paper presents an implementation of a speech recognition system for teaching an autonomous mobile robot. Using human speech as the teaching method provides a more convenient user interface for the mobile robot. In this study, to make teaching easy, an autonomous mobile robot with a speech recognition function is developed. In the speech recognition system, an algorithm based on an HMM (Hidden Markov Model) is presented to recognize Korean words. A filter-bank analysis model is used as the spectral analysis method for feature extraction. A recognized word is converted into a command for controlling robot navigation.
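A small sketch of this kind of system, a filter-bank front end plus one HMM per command word, is shown below using librosa and hmmlearn. The word list, file names, and model sizes are assumptions; the paper's own implementation is not reproduced here.

```python
# Sketch: filter-bank features and per-word Gaussian HMMs for command recognition.
# Word list, training file names, and model sizes are illustrative assumptions.
import numpy as np
import librosa
from hmmlearn import hmm

def fbank(path, sr=16000, n_mels=24):
    y, _ = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=400, hop_length=160, n_mels=n_mels)
    return np.log(S + 1e-10).T                       # (frames, n_mels) log filter-bank energies

models = {}
for word in ["forward", "back", "left", "right", "stop"]:        # hypothetical command set
    feats = [fbank(f"{word}_{i}.wav") for i in range(10)]        # hypothetical training files
    m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    m.fit(np.vstack(feats), [f.shape[0] for f in feats])
    models[word] = m

test = fbank("test.wav")                                         # utterance to recognize
print(max(models, key=lambda w: models[w].score(test)))          # most likely command word
```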

Speech Quality of a Sinusoidal Model Depending on the Number of Sinusoids

  • Seo, Jeong-Wook; Kim, Ki-Hong; Seok, Jong-Won; Bae, Keun-Sung
    • 음성과학, Vol. 7, No. 1, pp. 17-29, 2000
  • STC (Sinusoidal Transform Coding) is a vocoding technique that uses a sinusoidal speech model to obtain high-quality speech at a low data rate. It models and synthesizes the speech signal with the fundamental frequency and its harmonics in the frequency domain. To reduce the data rate, it is necessary to represent the sinusoidal amplitudes and phases with as few peaks as possible while maintaining speech quality. As basic research toward a low-rate speech coding algorithm using the sinusoidal model, this paper investigates speech quality depending on the number of sinusoids. Speech signals are reconstructed while varying the number of spectral peaks from 5 to 40, and their quality is then evaluated using a spectral envelope distortion measure and MOS (Mean Opinion Score). Two approaches are used to obtain the spectral peaks: one is a conventional STFT (Short-Time Fourier Transform), and the other is a multiresolution analysis method.
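A rough way to reproduce this kind of experiment is to keep only the N strongest spectral peaks in each analysis frame and resynthesize, as sketched below. The frame settings, the input file, and the simple peak-retention resynthesis are assumptions; this is not the STC coder itself.

```python
# Sketch: retain only the N largest spectral peaks per frame and resynthesize.
# Frame sizes, the input file, and the crude distortion measure are assumptions.
import numpy as np
import librosa
from scipy.signal import find_peaks

def resynthesize(y, n_peaks, n_fft=1024, hop=256):
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    S_out = np.zeros_like(S)
    for t in range(S.shape[1]):
        mag = np.abs(S[:, t])
        peaks, _ = find_peaks(mag)                          # local maxima of the magnitude spectrum
        if len(peaks):
            top = peaks[np.argsort(mag[peaks])[-n_peaks:]]  # N strongest peaks
            S_out[top, t] = S[top, t]                       # keep their complex values
    return librosa.istft(S_out, hop_length=hop, length=len(y))

y, sr = librosa.load("speech.wav", sr=16000)                # hypothetical input signal
for n in (5, 10, 20, 40):
    y_hat = resynthesize(y, n)
    print(n, np.mean((y - y_hat) ** 2))                     # crude distortion, not the paper's measure
```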

뇌성마비 성인의 일상발화와 명료한 발화에서의 모음의 음향적 특성 (Acoustic properties of vowels produced by cerebral palsic adults in conversational and clear speech)

  • 고현주; 김수진
    • 대한음성학회 학술대회논문집 (Spring 2006 conference), pp. 101-104, 2006
  • The present study examined two acoustic characteristics (duration and intensity) of vowels produced by four adults with cerebral palsy and four nondisabled adults in conversational and clear speech. In this study, clear speech means (1) slowing the speech rate slightly and (2) articulating all phonemes accurately while increasing vocal volume. The speech material included 10 bisyllabic real words in frame sentences. Temporal-acoustic analysis showed that vowels produced by both speaker groups in clear speech (i.e., more accurate and louder speech) were significantly longer than vowels in conversational speech. In addition, the intensity of vowels produced by the speakers with cerebral palsy was higher in clear speech than in conversational speech.
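The two measures compared here, vowel duration and intensity, can be taken from a labeled segment as in the sketch below; the file name and vowel boundaries are invented for illustration, and real studies would use hand labels or forced alignment.

```python
# Sketch: vowel duration and RMS intensity from labeled boundaries.
# "token.wav" and the onset/offset times are illustrative assumptions.
import numpy as np
import librosa

y, sr = librosa.load("token.wav", sr=16000)
t_on, t_off = 0.213, 0.347                      # vowel onset/offset in seconds (example values)

vowel = y[int(t_on * sr):int(t_off * sr)]
duration_ms = 1000 * (t_off - t_on)
intensity_db = 20 * np.log10(np.sqrt(np.mean(vowel ** 2)) + 1e-12)   # relative dB

print(f"duration: {duration_ms:.1f} ms, intensity: {intensity_db:.1f} dB")
```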

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • 말소리와 음성과학, Vol. 15, No. 2, pp. 13-20, 2023
  • This study explores the potential of automated speech recognition (ASR) in assessing English learners' pronunciation. We employed ASR technology, acknowledged for its impartiality and consistent results, to analyze speech audio files, including synthesized speech (both native-like English and Korean-accented English) and recordings from a native English speaker. Through this analysis, we establish baseline values for the word error rate (WER). These were then compared with those obtained from human raters in perception experiments that assessed the speech productions of 30 first-year college students before and after taking a pronunciation course. Our sub-group analyses revealed positive training effects for both Whisper, an ASR tool, and human raters, and identified distinct human rater strategies in different assessment aspects, such as proficiency, intelligibility, accuracy, and comprehensibility, that were not observed in ASR. Despite challenges such as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline the pronunciation assessment process. With ongoing advancements in ASR technology, its potential as not only an assessment aid but also a self-directed learning tool for pronunciation feedback merits further exploration.
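Scoring an ASR transcript against a reference with word error rate, the baseline measure used above, can be done as in the sketch below. The Whisper model size, audio file, and reference sentence are assumptions about a typical setup, not the study's data or pipeline.

```python
# Sketch: WER of a Whisper transcript against a reference transcript.
# The audio file and reference sentence are illustrative, not the study's material.
import jiwer
import whisper                     # openai-whisper package

model = whisper.load_model("base")
result = model.transcribe("learner_recording.wav")     # hypothetical learner recording

reference = "she had your dark suit in greasy wash water all year"
hypothesis = result["text"].lower().strip()

print("WER:", jiwer.wer(reference, hypothesis))
```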

Intra- and Inter-frame Features for Automatic Speech Recognition

  • Lee, Sung Joo; Kang, Byung Ok; Chung, Hoon; Lee, Yunkeun
    • ETRI Journal, Vol. 36, No. 3, pp. 514-517, 2014
  • In this paper, alternative dynamic features for speech recognition are proposed. The goal of this work is to improve speech recognition accuracy by deriving a representation of distinctive dynamic characteristics from the speech spectrum. This work was inspired by two temporal dynamics of a speech signal: the highly non-stationary nature of speech and the inter-frame change of the speech spectrum. We adopt a sub-frame spectrum analyzer to capture very rapid spectral changes within a speech analysis frame. In addition, we attempt to measure spectral fluctuations in a more complex manner than traditional dynamic features such as delta or double-delta. To evaluate the proposed features, speech recognition tests were conducted in smartphone environments. The experimental results show that feature streams simply combined with the proposed features are effective in improving the recognition accuracy of a hidden Markov model-based speech recognizer.
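For reference, the conventional delta and double-delta dynamic features that the proposed intra-/inter-frame features are contrasted with can be computed as below; the MFCC configuration and file name are typical defaults, not the paper's setup.

```python
# Sketch: conventional delta / double-delta dynamic features over MFCCs.
# "utterance.wav" and the parameter values are illustrative defaults.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=160)
delta = librosa.feature.delta(mfcc)                 # first-order inter-frame change
delta2 = librosa.feature.delta(mfcc, order=2)       # second-order change

features = np.vstack([mfcc, delta, delta2])         # static + dynamic feature stream
print(features.shape)                               # (39, n_frames)
```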

MMSE-STSA 기반의 음성개선 기법에서 잡음 및 신호 전력 추정에 사용되는 파라미터 값의 변화에 따른 잡음음성의 인식성능 분석 (Performance Analysis of Noisy Speech Recognition Depending on Parameters for Noise and Signal Power Estimation in MMSE-STSA Based Speech Enhancement)

  • 박철호; 배건성
    • 대한음성학회지: 말소리, No. 57, pp. 153-164, 2006
  • The MMSE-STSA based speech enhancement algorithm is widely used as preprocessing for noise-robust speech recognition. It weights the gain of each spectral bin of the noisy speech using estimates of the noise and signal power spectra. In this paper, we investigate the influence of the parameters used to estimate the speech signal and noise power in MMSE-STSA on the recognition performance for noisy speech. For the experiments, we use the Aurora2 DB, which contains noisy speech with subway, babble, car, and exhibition noises. An HTK-based continuous HMM system is constructed for the recognition experiments. Experimental results are presented and discussed along with our findings.
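The kind of per-bin gain and smoothing parameter the paper varies can be illustrated with the decision-directed a priori SNR estimate sketched below. For brevity the gain is the simpler Wiener form rather than the full MMSE-STSA estimator, and the leading-frame noise estimate, alpha, and file names are assumptions.

```python
# Sketch: decision-directed a priori SNR and a Wiener-style per-bin gain
# (a simplification of MMSE-STSA). alpha, noise_frames, and file names are assumptions.
import numpy as np
import librosa
import soundfile as sf

def enhance(y, alpha=0.98, noise_frames=10, n_fft=512, hop=128):
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    power = np.abs(S) ** 2
    noise_psd = power[:, :noise_frames].mean(axis=1) + 1e-12   # noise power from leading frames
    prev_clean = np.zeros(S.shape[0])                          # enhanced power of previous frame
    out = np.zeros_like(S)
    for t in range(S.shape[1]):
        post_snr = power[:, t] / noise_psd                      # a posteriori SNR
        prio_snr = alpha * prev_clean / noise_psd + (1 - alpha) * np.maximum(post_snr - 1, 0)
        gain = prio_snr / (1 + prio_snr)                        # Wiener gain (MMSE-STSA uses a different formula)
        out[:, t] = gain * S[:, t]
        prev_clean = np.abs(out[:, t]) ** 2
    return librosa.istft(out, hop_length=hop, length=len(y))

y, sr = librosa.load("noisy.wav", sr=8000)                      # hypothetical Aurora2-like file
sf.write("enhanced.wav", enhance(y), sr)
```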

잡음음성에서의 음성 활성화 구간 검출 방법 (Speech Active Interval Detection Method in Noisy Speech)

  • 이광석; 추연규; 김현덕
    • 한국정보통신학회 학술대회논문집, 한국해양정보통신학회 2008 Fall Conference (B), pp. 779-782, 2008
  • In speech communication and speech recognition, detecting speech activity intervals in noise-corrupted speech is known to be a crucial step. In this study, we therefore propose a composite feature parameter based on spectral entropy for detecting speech activity intervals in noisy speech, and compare its performance with an energy-based speech activity detection method. Experimental results confirm that, in noisy environments, the proposed parameter detects speech activity intervals better than the other parameters.
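A spectral-entropy-based activity measure of the kind compared in this paper can be sketched as below; the frame settings and threshold are illustrative assumptions rather than the authors' composite parameter.

```python
# Sketch: spectral-entropy voice activity measure. Threshold and frame
# settings are illustrative; the paper's composite parameter is not reproduced.
import numpy as np
import librosa

def spectral_entropy_vad(y, n_fft=400, hop=160, threshold=0.85):
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    p = S / (S.sum(axis=0, keepdims=True) + 1e-12)      # normalized spectrum per frame
    entropy = -(p * np.log2(p + 1e-12)).sum(axis=0)
    entropy /= np.log2(S.shape[0])                      # scale to [0, 1]
    return entropy < threshold                          # peaky (low-entropy) frames treated as speech

y, sr = librosa.load("noisy_speech.wav", sr=16000)      # hypothetical noisy recording
print(spectral_entropy_vad(y).astype(int))              # 1 = active speech frame
```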

How Korean Learner's English Proficiency Level Affects English Speech Production Variations

  • Hong, Hye-Jin; Kim, Sun-Hee; Chung, Min-Hwa
    • 말소리와 음성과학, Vol. 3, No. 3, pp. 115-121, 2011
  • This paper examines how L2 speech production varies according to a learner's L2 proficiency level. L2 speech production variations are analyzed with quantitative measures at the word and phone levels using a Korean learners' English corpus. Word-level variations are analyzed using correctness to capture how speech realizations differ from the canonical forms, while accuracy is used at the phone level to reflect phone insertions and deletions together with substitutions. The results show that the speech production of learners at different L2 proficiency levels differs considerably in terms of performance and individual realizations at the word and phone levels. These results confirm that the speech production of non-native speakers varies with their L2 proficiency level, even though they share the same L1 background. Furthermore, they will contribute to improving the non-native speech recognition performance of ASR-based English language educational systems for Korean learners of English.
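The two measures named above are commonly defined from an edit-distance alignment of canonical and realized sequences: correctness ignores insertions, while accuracy penalizes them as well. A small sketch follows; the phone sequences are invented for illustration.

```python
# Sketch: correctness = (N - S - D) / N and accuracy = (N - S - D - I) / N
# from an edit-distance alignment. The example phone sequences are invented.
def align_counts(ref, hyp):
    n, m = len(ref), len(hyp)
    d = [[(0, 0, 0, 0)] * (m + 1) for _ in range(n + 1)]     # (cost, S, D, I)
    for i in range(1, n + 1):
        d[i][0] = (i, 0, i, 0)
    for j in range(1, m + 1):
        d[0][j] = (j, 0, 0, j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = int(ref[i - 1] != hyp[j - 1])
            c, s, de, ins = d[i - 1][j - 1]
            sub = (c + diff, s + diff, de, ins)              # match or substitution
            c, s, de, ins = d[i - 1][j]
            dele = (c + 1, s, de + 1, ins)                   # deletion
            c, s, de, ins = d[i][j - 1]
            inse = (c + 1, s, de, ins + 1)                   # insertion
            d[i][j] = min(sub, dele, inse)
    _, S, D, I = d[n][m]
    return S, D, I

ref = "b r ih ng k ae n".split()           # canonical phone sequence (illustrative)
hyp = "b ax r ih ng g ae n".split()        # realized sequence: one insertion, one substitution
S, D, I = align_counts(ref, hyp)
N = len(ref)
print("correctness:", (N - S - D) / N)     # substitutions and deletions only
print("accuracy:", (N - S - D - I) / N)    # insertions penalized as well
```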
