• Title/Summary/Keyword: Emotional speech

Modelling and Synthesis of Emotional Speech on Multimedia Environment (멀티미디어 환경을 위한 정서음성의 모델링 및 합성에 관한 연구)

  • Jo, Cheol-Woo; Kim, Dae-Hyun
    • Speech Sciences, v.5 no.1, pp.35-47, 1999
  • This paper describes procedures to model and synthesize emotional speech in a multimedia environment. First, procedures to model the visual representation of emotional speech are proposed. To display image sequences in synchronization with speech, the MSF (Multimedia Speech File) format is proposed and display software is implemented. Emotional speech signals are then collected and analysed to obtain the prosodic characteristics of emotional speech in a limited domain. Multi-emotional sentences are spoken by actors. From the emotional speech signals, prosodic structures are compared in terms of pseudo-syntactic structure. Based on the analysis, neutral speech is transformed into a specific emotional state by modifying its prosodic structures.
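
As a rough illustration of the final step (transforming neutral speech into an emotional state by modifying prosody), the sketch below shifts pitch and stretches time with librosa; the per-emotion scaling factors are invented placeholders, not values from the paper.

```python
import librosa

# Illustrative prosody targets per emotion (placeholders, not the
# paper's measured values).
EMOTION_PROSODY = {
    "anger":   {"pitch_steps": 2.0,  "rate": 1.15},  # higher pitch, faster
    "sadness": {"pitch_steps": -1.5, "rate": 0.85},  # lower pitch, slower
}

def transform_neutral(y, sr, emotion):
    """Shift pitch and stretch time of neutral speech toward an emotion."""
    p = EMOTION_PROSODY[emotion]
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=p["pitch_steps"])
    return librosa.effects.time_stretch(y, rate=p["rate"])

y, sr = librosa.load("neutral.wav", sr=None)
angry = transform_neutral(y, sr, "anger")
```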

Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea, v.23 no.3E, pp.85-90, 2004
  • This paper presents two new approaches: one concerns recognition of emotional speech conveying anger, happiness, normal, sadness, or surprise; the other concerns recognition of the emotion itself from speech. For the proposed recognition system handling speech with emotional states, a total of nine prosodic features were first extracted and then given to a prosodic identifier. In evaluation, recognition rates on emotional speech increased more with the proposed method than with the existing speech recognizer. For emotion recognition, on the other hand, four prosodic parameters (pitch, energy, and their derivatives) were proposed and then trained with discrete-duration continuous hidden Markov models (DDCHMM). In this approach, the emotion models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In evaluation, recognition rates on emotional states gradually increased with the number of adaptation samples.
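
The abstract names pitch, energy, and their derivatives as the prosodic parameters. Below is a hedged sketch of extracting such a four-stream feature set with librosa; the frame settings are assumptions, not the paper's.

```python
import numpy as np
import librosa

def prosodic_features(path):
    """Return pitch, energy, and their deltas as a 4-row feature matrix."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # frame-level pitch track
    rms = librosa.feature.rms(y=y)[0]              # frame-level energy
    n = min(len(f0), len(rms))                     # align frame counts
    f0, rms = f0[:n], rms[:n]
    return np.vstack([f0, rms,
                      librosa.feature.delta(f0),    # pitch derivative
                      librosa.feature.delta(rms)])  # energy derivative
```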

A Study on Image Recommendation System based on Speech Emotion Information

  • Kim, Tae Yeun; Bae, Sang Hyun
    • Journal of Integrative Natural Science, v.11 no.3, pp.131-138, 2018
  • In this paper, we implemented an image matching and recommendation system that utilizes the emotional information in the user's speech. To classify this information, the emotional content of the user's speech is extracted and classified using the PLP algorithm, and an emotional speech DB is then constructed. In addition, emotional colors and emotional vocabulary obtained through factor analysis are matched into one space in order to classify the emotional information of images. A standardized image recommendation system then matches each keyword with the BM-GA algorithm over the emotional information of the speech and of the images, according to the user's speech. In the performance evaluation, the recognition rate of the standardized vocabulary over four stages was 80.48% on average, and system user satisfaction was 82.4%. We therefore expect the classification of images according to the user's speech information to be helpful for the study of emotional exchange between users and computers.
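
As a toy illustration of the matching idea (speech emotion and image emotion mapped into one space, then ranked), the sketch below uses cosine similarity over an invented two-axis space; the paper itself uses factor analysis and a BM-GA algorithm, which are not reproduced here.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(speech_vec, image_db, k=3):
    """Rank images by similarity of their emotion vector to the speech's."""
    scored = sorted(((cosine(speech_vec, v), name)
                     for name, v in image_db.items()), reverse=True)
    return [name for _, name in scored[:k]]

# Hypothetical shared emotion space (axes and data invented for illustration).
image_db = {
    "sunset.jpg": np.array([0.8, 0.2]),    # [valence, arousal]
    "storm.jpg":  np.array([-0.6, 0.9]),
    "meadow.jpg": np.array([0.7, -0.3]),
}
print(recommend(np.array([0.6, -0.2]), image_db, k=2))
```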

Determining the Relative Differences of Emotional Speech Using Vocal Tract Ratio

  • Wang, Jianglin; Jo, Cheol-Woo
    • Speech Sciences, v.13 no.1, pp.109-116, 2006
  • This study focuses on the differences among emotional speech in three vocal tract sections. The vocal tract area was computed from the area function of the emotional speech, and the vocal tract was divided into three sections (vocal fold section, middle section, and lip section) to capture the differences in each section. The experimental data comprise speech in six emotions from three male and three female speakers; the six emotions are neutral, happiness, anger, sadness, fear, and boredom. The difference is measured as the ratio of each emotional speech to normal speech. The results show no remarkable difference in the lip section, while fear and sadness show large changes in the vocal fold section.
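
The abstract does not say how the area function is obtained; a standard route is LPC reflection coefficients and the lossless-tube relation, sketched below as an assumption rather than the paper's exact method.

```python
import numpy as np

def reflection_coeffs(frame, order=12):
    """Levinson-Durbin recursion; returns the LPC reflection coefficients."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1); a[0] = 1.0
    err, ks = r[0], []
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1][:i]   # update predictor coefficients
        err *= 1.0 - k * k
        ks.append(k)
    return np.array(ks)

def area_function(frame, order=12, lip_area=1.0):
    """Lossless-tube section areas via A_next = A * (1 - k) / (1 + k).

    The sign/direction convention depends on how the LPC polynomial is
    defined; the sections can then be split into thirds (glottis, middle,
    lips) and compared with neutral speech as ratios, as in the paper.
    """
    areas = [lip_area]
    for k in reflection_coeffs(frame, order):
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return np.array(areas)
```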

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon; Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.8, pp.3992-3998, 2013
  • This paper concerns a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer to generate varied synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation synthesizer and implemented a synthesizer that generates varied speech through the same unit-selection process as a normal synthesizer. A markup language is used for the emotional input text. Emotional speech is generated when the input text matches an intonation phrase in the emotional speech corpus; otherwise, normal speech is generated (see the sketch below). The break indices (BIs) of emotional speech are more irregular than those of normal speech, so it is difficult to use the BIs generated by a synthesizer as they are. To solve this problem we applied Variable Break [3] modeling. We used a Japanese speech synthesizer for the experiments and obtained natural emotional synthesized speech using the break-prediction module of the normal synthesizer.
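
A toy sketch of the selection rule described above: emotional units are used only when a whole intonation phrase of the tagged input is found in the emotional corpus, with normal synthesis as the fallback. The data structures and names are invented for illustration.

```python
# Hypothetical lookup: (emotion, intonation phrase) -> recorded unit sequence.
emotional_corpus = {
    ("happy", "good morning everyone"): ["unit_0192", "unit_0193"],
}

def select_units(phrase, emotion, normal_synthesize):
    """Use emotional units only when the whole intonation phrase matches."""
    units = emotional_corpus.get((emotion, phrase))
    if units is not None:
        return units                      # full-phrase match: emotional speech
    return normal_synthesize(phrase)      # otherwise: normal synthesis

print(select_units("good morning everyone", "happy",
                   lambda p: [f"normal:{w}" for w in p.split()]))
```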

Emotional Speaker Recognition using Emotional Adaptation (감정 적응을 이용한 감정 화자 인식)

  • Kim, Weon-Goo
    • The Transactions of The Korean Institute of Electrical Engineers, v.66 no.7, pp.1105-1110, 2017
  • Speech with various emotions degrades the performance of speaker recognition systems. In this paper, a speaker recognition method using emotional adaptation is proposed to improve performance on affective speech. For emotional adaptation, an emotional speaker model is generated from the emotion-free speaker model using a small amount of affective training speech and a speaker adaptation method. Since it is not easy to obtain sufficient affective speech from a speaker for training, using only a small amount of affective speech is very practical in real situations. The proposed method was evaluated using a Korean database containing four emotions. Experimental results show that the proposed method outperforms conventional methods in speaker verification and recognition.
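
The abstract does not name the adaptation method; a common choice for adapting a speaker model from little data is MAP adaptation of GMM means, sketched below under that assumption (the relevance factor value is illustrative).

```python
import numpy as np

def map_adapt_means(prior_means, resp, frames, relevance=16.0):
    """MAP-adapt GMM component means toward a small affective-speech set.

    prior_means: (K, D) means of the emotion-free speaker model
    resp:        (N, K) component responsibilities of the adaptation frames
    frames:      (N, D) affective adaptation frames
    """
    n_k = resp.sum(axis=0)                             # soft count per component
    xbar = (resp.T @ frames) / np.maximum(n_k, 1e-8)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]         # data weight per component
    return alpha * xbar + (1.0 - alpha) * prior_means  # interpolate with prior
```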

An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications (다음색 감정 음성합성 응용을 위한 감정 SSML 처리기)

  • Ryu, Se-Hui; Cho, Hee; Lee, Ju-Hyun; Hong, Ki-Hyung
    • The Journal of the Acoustical Society of Korea, v.40 no.5, pp.523-529, 2021
  • In this paper, we designed and developed an Emotional Speech Synthesis Markup Language (SSML) processor. Multi-speaker emotional speech synthesis technology that can express multiple voice colors and emotional expressions has been developed, and we designed Emotional SSML by extending SSML with multiple voice colors and emotional expressions. The Emotional SSML processor has a graphical user interface and consists of four components: first, a multi-speaker emotional text editor that can easily mark specific voice colors and emotions at desired positions; second, an Emotional SSML document generator that creates an Emotional SSML document automatically from the editor's result; third, an Emotional SSML parser that parses the document; and last, a sequencer that controls a multi-speaker emotional Text-to-Speech (TTS) engine based on the parser's result. Because it builds on SSML, a programming-language- and platform-independent open standard, the Emotional SSML processor integrates easily with various speech synthesis engines and facilitates the development of multi-speaker emotional text-to-speech applications.
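
The abstract does not give the schema, but an Emotional SSML document in the spirit of the paper might look like standard SSML with an added emotion attribute; the sketch below builds one with Python's xml.etree, with the attribute names treated as hypothetical.

```python
import xml.etree.ElementTree as ET

# Standard SSML skeleton; the "emotion" attribute is a hypothetical
# extension in the spirit of the paper, not its actual schema.
speak = ET.Element("speak", version="1.1",
                   xmlns="http://www.w3.org/2001/10/synthesis")
a = ET.SubElement(speak, "voice", name="speaker_a", emotion="happiness")
a.text = "What a wonderful surprise!"
b = ET.SubElement(speak, "voice", name="speaker_b", emotion="sadness")
b.text = "I wish you could stay longer."
print(ET.tostring(speak, encoding="unicode"))
```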

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • Speech Sciences, v.15 no.1, pp.63-72, 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear, and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1, and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature contributed to speaker discrimination to a different degree in different emotional modes. Sadness and neutral showed greater speaker discrimination than the other modes (happiness, fear, and anger, in descending order of F-ratio); in other words, speaker specificity was better represented in sadness and neutral than in happiness, fear, and anger for all of the acoustic features.
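
For reference, the F-ratio in such speaker-variability studies is commonly defined as between-speaker variance over within-speaker variance of a feature x (the paper's exact normalization may differ):

```latex
F = \frac{\frac{1}{S}\sum_{s=1}^{S}\left(\bar{x}_{s}-\bar{x}\right)^{2}}
         {\frac{1}{S}\sum_{s=1}^{S}\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\left(x_{s,i}-\bar{x}_{s}\right)^{2}}
```

where x_{s,i} is the i-th token of speaker s, \bar{x}_s is that speaker's mean over N_s tokens, and \bar{x} is the grand mean over all S speakers; a larger F means the feature separates speakers better.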

Normalization in Collection Procedures of Emotional Speech by Scriptual Context (대본 내용에 의한 정서음성 수집과정의 정규화에 대하여)

  • Jo, Cheol-Woo
    • Proceedings of the KSPS conference, 2006.05a, pp.123-125, 2006
  • One of the biggest unsolved problems in emotional speech acquisition is how to create or find a situation that is close to a natural or desired emotional state in humans. We propose a method to collect emotional speech data by script context. Several contexts were chosen from drama scripts by experts in the area and divided into six classes according to their contents. Two actors, one male and one female, read the text after internalizing the emotional situations in the script.

Korean Prosody Generation Based on Stem-ML (Stem-ML에 기반한 한국어 억양 생성)

  • Han, Young-Ho; Kim, Hyung-Soon
    • MALSORI, no.54, pp.45-61, 2005
  • In this paper, we present a method of generating intonation contours for a Korean text-to-speech (TTS) system and a method of synthesizing emotional speech, both based on Soft template mark-up language (Stem-ML), a prosody generation model that combines mark-up tags and pitch generation in one. Evaluation shows that the intonation contour generated with Stem-ML is better than that of our previous work. Stem-ML also proves a useful tool for generating emotional speech by controlling a limited number of tags. A large emotional speech database is crucial for more extensive evaluation.
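
As a toy numpy illustration of the soft-template idea behind Stem-ML (each tag contributes a pitch template with a strength, and the final contour trades template fidelity against smoothness), the sketch below solves the corresponding least-squares problem; it is a simplification for intuition, not the actual Stem-ML model.

```python
import numpy as np

def soft_template_contour(n, templates, smooth=5.0):
    """Least-squares pitch contour from weighted templates.

    templates: list of (start, values, strength); stronger templates are
    matched more closely, weaker ones yield to the smoothness term.
    """
    rows, rhs = [], []
    for start, values, strength in templates:
        for i, v in enumerate(values):            # template fidelity terms
            row = np.zeros(n); row[start + i] = strength
            rows.append(row); rhs.append(strength * v)
    for t in range(n - 1):                        # smoothness penalty terms
        row = np.zeros(n); row[t], row[t + 1] = smooth, -smooth
        rows.append(row); rhs.append(0.0)
    contour, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return contour

# Two accent templates with different strengths (values are illustrative).
print(np.round(soft_template_contour(
    10, [(1, [120, 140, 120], 2.0), (5, [110, 100], 0.5)]), 1))
```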
