• Title/Summary/Keyword: Speech Visualization

SPEECH TRAINING TOOLS BASED ON VOWEL SWITCH/VOLUME CONTROL AND ITS VISUALIZATION

  • Ueda, Yuichi; Sakata, Tadashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.441-445 / 2009
  • We have developed a real-time software tool that extracts a speech feature vector whose time sequences consist of three groups of components: phonetic/acoustic features such as formant frequencies, phonemic features given as the outputs of neural networks, and distances to Japanese phonemes. Since the phoneme distances for the five Japanese vowels can express vowel articulation, we have designed a switch, a volume control, and a color representation that are operated by pronouncing vowel sounds. As examples of this vowel interface, we have developed speech training tools that display an image character or a rolling color ball and that control a cursor's movement, intended for aurally or vocally handicapped children. In this paper, we introduce the functions and the principles of those systems.
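
A minimal sketch of the vowel-switch idea described in this abstract: a frame's features are compared with stored vowel prototypes, and the switch fires when the nearest prototype is close enough. The (F1, F2) prototype values and the threshold are illustrative assumptions, not the authors' actual features or parameters.

```python
import numpy as np

# Toy (F1, F2) formant prototypes in Hz for the five vowels; stand-ins only.
PROTOTYPES = {
    "a": np.array([730.0, 1090.0]),
    "i": np.array([270.0, 2290.0]),
    "u": np.array([300.0, 870.0]),
    "e": np.array([530.0, 1840.0]),
    "o": np.array([570.0, 840.0]),
}

def vowel_switch(frame_features: np.ndarray, threshold: float = 150.0):
    """Return the recognized vowel if its distance is below threshold, else None."""
    distances = {v: np.linalg.norm(frame_features - p) for v, p in PROTOTYPES.items()}
    best = min(distances, key=distances.get)
    return best if distances[best] < threshold else None

# Example: a frame whose formants lie near /a/ trips the "a" switch.
print(vowel_switch(np.array([700.0, 1100.0])))  # -> 'a'
```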

Teaching Pronunciation Using Sound Visualization Technology to EFL Learners

  • Min, Su-Jung; Pak, Hubert H.
    • English Language & Literature Teaching / v.13 no.2 / pp.129-153 / 2007
  • When English language teachers decide on their priorities for teaching pronunciation, it is imperative to know what kinds of differences and errors are most likely to interfere with communication, and what special problems speakers of a particular first language will have with English pronunciation. In other words, phoneme discrimination skill is an integral part of speech processing for EFL learners learning to converse in English. Training that uses sound visualization can be effective in improving second language learners' perception and production of segmental and suprasegmental speech contrasts. This study assessed the efficacy of pronunciation training that provided visual feedback for EFL learners acquiring pitch and durational contrasts to produce and perceive English phonemic distinctions. The subjects' ability to produce and perceive novel English words was tested before and after training in two contexts: words in isolation and words in sentences. In comparison with an untrained control group, trainees showed improved perceptual and productive performance, transferred their knowledge to new contexts, and maintained their improvement three months after training. These findings support the feasibility of learner-centered programs using sound visualization for English pronunciation instruction.
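
A minimal sketch of the kind of pitch-contour visual feedback this study relies on, assuming the librosa and matplotlib packages; the file name is a placeholder, and the study's actual software is not identified in the abstract.

```python
import librosa
import matplotlib.pyplot as plt

# Load a learner's recording (placeholder file name).
y, sr = librosa.load("learner_utterance.wav")

# Estimate the fundamental frequency (F0) contour with probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

# Plot the contour so pitch and durational contrasts become visible.
plt.figure(figsize=(8, 3))
plt.plot(times, f0)
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Pitch contour shown to the learner")
plt.tight_layout()
plt.show()
```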

Learning French Intonation with a Base of the Visualization of Melody (억양의 시각화를 통한 프랑스어의 억양학습)

  • Lee, Jung-Won
    • Speech Sciences / v.10 no.4 / pp.63-71 / 2003
  • This study experiments with learning French intonation based on the visualization of melody, a technique first employed in the early sixties to re-educate people with communication disorders. Here, however, the visualization of melody was applied to foreign language learning and produced successful results in many respects, especially in learning foreign intonation. We used PitchWorks to visualize French intonation samples and carried out intonation-learning experiments with the bitmap picture projected on a screen, so that students could see the melody curve while listening to the sentences. As the results of the experiment verify, the students achieved a great deal: they were much more motivated and showed greater improvement in recognizing intonation contours than with learning by hearing alone. However, the lack of animation in the bitmap file risked reducing the experiment to boring pattern practice. It would be better to use a pitch analyser such as PitchWorks interactively, since students could then see their own fluctuating intonation visualized on the screen.
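
The shortcoming noted above, a static bitmap with no animation, suggests revealing the melody curve progressively in sync with playback. A minimal sketch with matplotlib, using a synthetic contour as a stand-in for PitchWorks output:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

t = np.linspace(0, 2.0, 200)                 # a 2-second utterance
f0 = 180 + 40 * np.sin(2 * np.pi * 0.8 * t)  # synthetic intonation curve (Hz)

fig, ax = plt.subplots()
ax.set_xlim(0, 2.0)
ax.set_ylim(120, 240)
ax.set_xlabel("Time (s)")
ax.set_ylabel("F0 (Hz)")
(line,) = ax.plot([], [], lw=2)

def update(i):
    # Reveal the melody curve up to frame i, simulating playback.
    line.set_data(t[:i], f0[:i])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(t), interval=10, blit=True)
plt.show()
```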

Speech Visualization of Korean Vowels Based on the Distances Among Acoustic Features (음성특징의 거리 개념에 기반한 한국어 모음 음성의 시각화)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.5 / pp.512-520 / 2019
  • Visual representations of speech are quite useful both for learners studying foreign languages and for the hearing impaired who cannot hear speech directly, and a number of studies have been presented in the literature. They remain, however, at the level of representing the characteristics of speech using colors, or of showing the changing shapes of the lips and mouth with animation-based representations. As a result, such methods cannot tell users how far their pronunciation is from the standard one, and they make it technically difficult to build a system in which users can correct their pronunciation interactively. To address these drawbacks, this paper proposes a speech visualization model based on the relative distance between the user's speech and the standard one, and suggests implementation directions by applying the proposed model to the visualization of Korean vowels. The method extracts the three formants F1, F2, and F3 from speech signals and feeds them into Kohonen's SOM, which maps the results onto a 2-D screen where each speech sample is represented as a point. We present a real system, implemented with open-source formant analysis software, applied to the speech of a Korean instructor and several foreign students studying Korean, with a user interface built in JavaScript for the screen display.
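
A rough sketch of the mapping step described in this abstract, assuming the MiniSom package: 3-D formant vectors (F1, F2, F3) are projected onto a 2-D grid by a Kohonen SOM, so the distance between a learner's vowel and the reference can be read off the grid. The formant data here are random stand-ins, not real measurements.

```python
import numpy as np
from minisom import MiniSom

# Random stand-in (F1, F2, F3) vectors in plausible Hz ranges.
rng = np.random.default_rng(0)
formants = rng.uniform([200, 800, 2200], [900, 2500, 3200], size=(100, 3))

# Train a 10x10 SOM on the 3-D formant vectors.
som = MiniSom(10, 10, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(formants, num_iteration=1000)

# Map a reference vowel and a learner's vowel to grid cells and compare.
reference = som.winner(formants[0])
learner = som.winner(formants[1])
distance = np.linalg.norm(np.subtract(reference, learner))
print(f"reference cell {reference}, learner cell {learner}, grid distance {distance:.2f}")
```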

Consonant Confusion Matrices in Adults with Dysarthria Associated with Cerebral Palsy (뇌성마비로 인한 마비말장애 성인의 자음 오류 분석)

  • Lee, Youngmee; Sung, JeeEun; Sim, HyunSub
    • Phonetics and Speech Sciences / v.5 no.1 / pp.47-54 / 2013
  • The aim of this study was to analyze consonant articulation errors produced by 90 speakers with cerebral palsy (CP). Phonetic transcriptions were made for 37 single-word utterances containing 70 phonemes: 48 initial consonants and 22 final consonants. Errors of substitution, omission, and distortion were analyzed using a confusion matrix paradigm that visualizes error patterns. Results showed that substitution errors in initial and final consonants were most frequent, followed by omission and distortion, and that consonant omission occurred more frequently on final consonants. In both initial and final consonants, within-place errors were more prominent than within-manner errors. The current results suggest that consonant confusion matrices for dysarthric speech may provide useful information for evaluating speech intelligibility and for developing automatic speech recognition systems for adults with CP-associated dysarthria.
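
A minimal sketch of the confusion-matrix paradigm described above: target and produced phonemes from transcriptions are tallied into a matrix whose off-diagonal cells expose substitution patterns. The phoneme pairs below are invented examples, not the study's data.

```python
from collections import Counter

# (target, produced) pairs from phonetic transcription; "-" marks an omission.
pairs = [("k", "t"), ("k", "k"), ("t", "t"), ("p", "-"), ("k", "t"), ("p", "p")]

phonemes = sorted({p for pair in pairs for p in pair})
counts = Counter(pairs)

# Print the confusion matrix: rows are targets, columns are productions.
print("     " + " ".join(f"{p:>3}" for p in phonemes))
for target in phonemes:
    row = " ".join(f"{counts[(target, produced)]:>3}" for produced in phonemes)
    print(f"{target:>3}: {row}")
```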

The Necessity of Cognitive Constructivist Instructional Design in Chinese Pronunciation Education (認知建構主義教學設計 在漢語發音教育中的必要性)

  • Lee, Seon-Hui
    • 중국학논총 / no.66 / pp.85-103 / 2020
  • We use prototypes (known as referents in semiotics) to understand the outside world, and users of different languages use different prototypes to decode the same sound. When Korean learners study Chinese as a foreign language, the target-language prototypes they use during speech perception differ from those of native Chinese speakers. The purpose of this paper is to examine the theory of speech perception and the theory of constructivist teaching, and to suggest that Chinese language teachers take a constructivist approach when designing their courses. To this end, we address three things. First, we review speech perception theory and constructivist teaching theory. Second, based on preceding studies, we show that learners' prototypes differ from those of native Chinese speakers and that this causes errors in listening and pronunciation. Finally, we introduce two simple speech visualization programs developed to help learners with pronunciation.

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a written form does not change, so foreign learners can approach the language rather easily. When one pronounces words, phrases, or sentences, however, the pronunciation changes with wide variation and complexity at syllable boundaries, and the association between notation and pronunciation no longer holds, which makes it very difficult for foreign learners to master standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is possible because, unlike in other languages such as English, the relationship between Korean notation and pronunciation can be described by a firm set of rules without exceptions. In this paper, we propose a visualization framework which shows the differences between standard pronunciations and erroneous ones as quantitative measures on the computer screen. Previous studies only show color representations or 3D graphics of speech properties, or an animated view of the changing shapes of the lips and mouth cavity; moreover, the features they use are only point data, such as the average over a speech range. In this study, we propose a method which directly uses the time-series data instead of summary or distorted data. This was realized with a deep learning-based technique that combines a self-organizing map, a variational autoencoder, and a Markov model, and we achieved a substantial performance improvement over the method using point-based data.
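
A rough sketch of the first stage of the pipeline named above (the variational autoencoder), assuming PyTorch; the architecture, dimensions, and data are illustrative stand-ins, not the paper's actual model.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Encodes a fixed-length formant trajectory into a small latent vector."""
    def __init__(self, seq_len=50, latent=2):
        super().__init__()
        self.enc = nn.Linear(seq_len, 16)
        self.mu = nn.Linear(16, latent)
        self.logvar = nn.Linear(16, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, seq_len))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

vae = TinyVAE()
trajectory = torch.randn(1, 50)  # stand-in formant trajectory, 50 frames
recon, mu, logvar = vae(trajectory)

# In the full pipeline, each utterance's latent mean would be placed on a SOM
# grid, and the sequence of grid cells over time modeled by a Markov chain to
# compare a learner's trajectory with the standard pronunciation.
print(mu.detach().numpy())
```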

Voice Expression using a Cochlear Filter Model

  • Jarng, Soon-Suck
    • The Journal of the Acoustical Society of Korea / v.15 no.1E / pp.20-28 / 1996
  • Speech sounds were applied to a cochlear filter simulated by an electrical transmission line. The amplitude of the basilar membrane displacement was calculated along the length of the cochlea as a temporal response, and the envelope of the amplitude along the length was arranged for each discrete time interval. The resulting time response to the speech sound was then displayed as a color image. The five vowels a, e, i, o, and u were applied and their results were compared. The whole procedure of this visualization method for speech sounds using the cochlear filter is described in detail; in short, the filter model's response to the voice is visualized by passing the voice through the cochlear filter model.
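
The paper renders the basilar-membrane envelope along the cochlea as a color image. As a rough stand-in for its transmission-line model, this sketch builds a similar place-time intensity image with a bank of Butterworth band-pass filters (an assumption, not the authors' method) applied to a toy two-tone signal.

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert
import matplotlib.pyplot as plt

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)  # toy "vowel"

# Channel center frequencies, low to high, standing in for cochlear places.
centers = np.geomspace(100, 4000, 32)
image = []
for fc in centers:
    b, a = butter(2, [fc * 0.8, fc * 1.2], btype="band", fs=fs)
    channel = lfilter(b, a, signal)
    image.append(np.abs(hilbert(channel)))  # envelope per channel

plt.imshow(np.array(image), aspect="auto", origin="lower",
           extent=[0, t[-1], 0, len(centers)])
plt.xlabel("Time (s)")
plt.ylabel("Channel (low to high CF)")
plt.title("Envelope image from a cochlear-filterbank stand-in")
plt.show()
```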

Investigation on Dynamic Behavior of Formant Information (포만트 정보의 동적 변화특성 조사에 관한 연구)

  • Jo, Cheolwoo
    • Phonetics and Speech Sciences / v.7 no.2 / pp.157-162 / 2015
  • This study reports an effective way of displaying dynamic formant information on the F1-F2 space. Conventional F1-F2 spaces (also called the vowel triangle or vowel quadrilateral) have been used to investigate the vowel characteristics of a speaker or a language based on statistics of the F1 and F2 values, computed by a spectral envelope search method. Those methods deal mainly with the static information of the formants, not with changes in the formant values (i.e., dynamic information). We therefore suggest a better way of investigating dynamic information in the formant values of a speech signal, so that more convenient and detailed investigation of dynamic changes can be carried out on the F1-F2 space. The suggested method visualizes static and dynamic information in an overlapped way, so that changes in the formant information can be observed easily. Finally, examples of the implemented display for several continuous vowels demonstrate the usefulness of the suggested method.
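
A minimal sketch of an overlapped static/dynamic display in the spirit of this abstract: mean vowel positions are drawn as static marks on the F1-F2 plane, and a time-ordered formant trajectory is overlaid with color encoding time. The vowel means and the glide are synthetic stand-ins.

```python
import numpy as np
import matplotlib.pyplot as plt

# Static information: toy mean (F1, F2) positions of two vowels, in Hz.
means = {"a": (730, 1090), "i": (270, 2290)}

# Dynamic information: a synthetic /a/ -> /i/ glide over 40 frames.
frames = 40
f1 = np.linspace(730, 270, frames)
f2 = np.linspace(1090, 2290, frames)

plt.scatter(f2, f1, c=np.arange(frames), cmap="viridis")
for v, (m1, m2) in means.items():
    plt.scatter(m2, m1, marker="x", s=100, color="red")
    plt.annotate(v, (m2, m1))
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()  # conventional vowel-chart orientation
plt.xlabel("F2 (Hz)")
plt.ylabel("F1 (Hz)")
plt.title("Static vowel means with a time-colored dynamic trajectory")
plt.colorbar(label="frame index")
plt.show()
```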

Development of a 3D-Graphics Based Visualization Application for Reliability-Centered Maintenance (신뢰도 중심 유지보수 기법을 이용한 3차원 기반의 변전소 유지보수 시각화 프로그램 개발)

  • Jung, Hong-Suk; Park, Chang-Hyun; Jang, Gil-Soo
    • Proceedings of the KIEE Conference / 2007.11b / pp.288-290 / 2007
  • This paper presents a visualization application using 3D graphics for effective maintenance of power equipment. The maintenance algorithm implemented in the application is based on Condition-Based Maintenance (CBM) and Reliability-Centered Maintenance (RCM). The main frame of the developed application was built on the Windows Application Programming Interface (API) and the Microsoft Foundation Classes (MFC). To develop the interactive 3D application, the WorldToolKit (WTK) library based on OpenGL was used, and Text-to-Speech (TTS) technology was added to enhance operator efficiency. The developed application helps power system operators intuitively recognize the present state and maintenance information of the equipment.
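
A minimal sketch of the spoken-alert idea using the pyttsx3 package; this library choice is an assumption, as the original application used a Windows TTS engine through its own API.

```python
import pyttsx3

# Speak a maintenance alert aloud so the operator need not watch the screen.
engine = pyttsx3.init()
engine.say("Transformer number three requires inspection.")
engine.runAndWait()
```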
