• Title/Abstract/Keyword: Emotional speech

멀티미디어 환경을 위한 정서음성의 모델링 및 합성에 관한 연구 (Modelling and Synthesis of Emotional Speech on Multimedia Environment)

  • 조철우;김대현
    • 음성과학
    • /
    • Vol. 5, No. 1
    • /
    • pp.35-47
    • /
    • 1999
  • This paper describes procedures for modelling and synthesizing emotional speech in a multimedia environment. First, procedures for modelling the visual representation of emotional speech are proposed. To display image sequences in synchronization with speech, the MSF (Multimedia Speech File) format is proposed and display software is implemented. Emotional speech signals are then collected and analyzed to obtain the prosodic characteristics of emotional speech in a limited domain. Multi-emotional sentences were spoken by actors. From the emotional speech signals, prosodic structures are compared in terms of pseudo-syntactic structure. Based on the analysis results, neutral speech is transformed into a specific emotional state by modifying its prosodic structure (a sketch of such a transformation follows below).

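The final transformation step can be illustrated with a small, hypothetical sketch: a neutral utterance is pushed toward an emotional target by globally rescaling pitch and duration with librosa. The emotion table and scale factors are assumptions, not the paper's measured prosodic structures.

```python
import librosa
import soundfile as sf

# Illustrative prosody targets per emotion: (pitch shift in semitones,
# tempo rate). These values are assumptions, not the paper's measurements.
EMOTION_PROSODY = {
    "happiness": (+2.0, 1.10),   # higher pitch, faster tempo (assumed)
    "sadness":   (-2.0, 0.85),   # lower pitch, slower tempo (assumed)
    "anger":     (+1.0, 1.15),
}

def emotionalize(wav_in: str, wav_out: str, emotion: str) -> None:
    """Globally rescale the prosody of a neutral utterance."""
    semitones, rate = EMOTION_PROSODY[emotion]
    y, sr = librosa.load(wav_in, sr=None)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)  # F0 scaling
    y = librosa.effects.time_stretch(y, rate=rate)                # duration scaling
    sf.write(wav_out, y, sr)

# emotionalize("neutral.wav", "sad.wav", "sadness")
```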

Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 3E
    • /
    • pp.85-90
    • /
    • 2004
  • This paper presents two new approaches, one concerned with the recognition of emotional speech such as anger, happiness, normal, sadness, or surprise, and the other with the recognition of emotion in speech. For the proposed speech recognition system handling human speech with emotional states, nine kinds of prosodic features were first extracted and then given to a prosodic identifier. In evaluation, the recognition rates on emotional speech with the proposed method were considerably higher than those of the existing speech recognizer. For the recognition of emotion, on the other hand, four prosodic parameters (pitch, energy, and their derivatives) were proposed and then trained with discrete-duration continuous hidden Markov models (DDCHMM) for recognition (the parameter extraction is sketched below). In this approach, the emotion models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In evaluation, the recognition rates on the emotional states gradually increased with the number of adaptation samples.
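
As a rough illustration of the prosodic parameters named above (pitch, energy, and their derivatives), the sketch below extracts the four frame-level streams with librosa; it is an assumption-laden stand-in, not the paper's implementation.

```python
import numpy as np
import librosa

def prosodic_features(wav_path: str) -> np.ndarray:
    """Return one 4-dimensional prosodic observation per frame."""
    y, sr = librosa.load(wav_path, sr=16000)
    # Frame-level F0 (NaN in unvoiced frames, replaced by 0 here)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = np.nan_to_num(f0)
    # Frame-level RMS energy
    energy = librosa.feature.rms(y=y)[0]
    n = min(len(f0), len(energy))
    f0, energy = f0[:n], energy[:n]
    # First-order derivatives (delta features)
    d_f0 = np.gradient(f0)
    d_energy = np.gradient(energy)
    # Stack into (n_frames, 4) observations, e.g. for HMM training
    return np.stack([f0, energy, d_f0, d_energy], axis=1)
```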

A Study on Image Recommendation System based on Speech Emotion Information

  • Kim, Tae Yeun;Bae, Sang Hyun
    • 통합자연과학논문집
    • /
    • Vol. 11, No. 3
    • /
    • pp.131-138
    • /
    • 2018
  • In this paper, we implemented a system that matches and recommends images based on the emotional information in the user's speech. To classify the emotional information of the user's speech, it is extracted and classified using the PLP algorithm, and a speech emotion DB is then constructed. To classify the emotional information of images, emotional colors and emotional vocabulary are matched into a single space through factor analysis. An image recommendation system then matches keywords between the speech emotion data and the image emotion data using the BM-GA algorithm, so that images fitting the user's speech emotion are recommended (a toy sketch of the matching step follows below). In the performance evaluation, the recognition rate of the standardized vocabulary over four stages of speech averaged 80.48%, and user satisfaction with the system was 82.4%. We therefore expect the classification of images according to the user's speech information to be helpful for the study of emotional exchange between users and computers.
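
Neither the PLP front end nor the BM-GA matching is reproduced here; the toy sketch below only illustrates the final step, ranking images whose emotional tags overlap the term set of the emotion recognized from speech. All tables are invented for illustration.

```python
# Hypothetical emotion-to-vocabulary table (the paper derives such
# associations via factor analysis; these entries are invented).
EMOTION_TO_TERMS = {
    "happiness": {"yellow", "bright", "warm"},
    "sadness":   {"blue", "dark", "calm"},
    "anger":     {"red", "intense"},
}

IMAGE_TAGS = {  # hypothetical image database: image id -> emotional tags
    "img01.jpg": {"yellow", "warm", "sunny"},
    "img02.jpg": {"blue", "calm", "night"},
    "img03.jpg": {"red", "intense", "storm"},
}

def recommend(emotion: str, top_k: int = 2) -> list[str]:
    """Rank images by tag overlap with the speech emotion's term set."""
    terms = EMOTION_TO_TERMS[emotion]
    scored = sorted(IMAGE_TAGS, key=lambda img: len(IMAGE_TAGS[img] & terms),
                    reverse=True)
    return scored[:top_k]

print(recommend("sadness"))  # -> ['img02.jpg', ...]
```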

Determining the Relative Differences of Emotional Speech Using Vocal Tract Ratio

  • Wang, Jianglin;Jo, Cheol-Woo
    • 음성과학
    • /
    • Vol. 13, No. 1
    • /
    • pp.109-116
    • /
    • 2006
  • In this paper, our study focuses on the differences of emotional speech across three vocal tract sections. The vocal tract area was computed from the area function of the emotional speech, and the vocal tract was divided into three sections (vocal fold section, middle section, and lip section) to acquire the differences of emotional speech in each section. The experimental data comprise speech in six emotions (neutral, happiness, anger, sadness, fear, and boredom) from three males and three females. The difference is measured as a ratio, comparing each emotional speech with the normal speech. The results show no remarkable difference at the lip section, but fear and sadness produce a large change in the vocal fold section (a rough sketch of the area-function computation follows below).

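A rough sketch of the measurement pipeline, assuming the usual LPC route to an area function (conventions for sign and orientation vary): derive reflection coefficients from the LPC polynomial, expand them into section areas, split the tract into three sections, and take per-section ratios against the normal utterance.

```python
# Follows the convention A[m+1] = A[m] * (1 + k[m]) / (1 - k[m]), A[0] = 1,
# and assumes a stable LPC model (|k| < 1). A sketch, not the paper's code.
import numpy as np
import librosa

def reflection_coeffs(a: np.ndarray) -> np.ndarray:
    """Backward Levinson recursion: LPC polynomial -> reflection coeffs."""
    a = a[1:].copy()          # drop the leading 1
    p = len(a)
    k = np.zeros(p)
    for i in range(p - 1, -1, -1):
        k[i] = a[i]
        if i > 0:             # step the polynomial down one order
            a = (a[:i] - k[i] * a[i - 1::-1]) / (1.0 - k[i] ** 2)
    return k

def area_function(y: np.ndarray, order: int = 12) -> np.ndarray:
    """Vocal tract section areas estimated from one speech segment."""
    a = librosa.lpc(y, order=order)
    k = reflection_coeffs(a)
    areas = [1.0]
    for km in k:
        areas.append(areas[-1] * (1.0 + km) / (1.0 - km))
    return np.array(areas)

def section_ratios(emotional: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Mean area ratio in the vocal-fold / middle / lip thirds of the tract."""
    thirds = lambda A: np.array_split(A, 3)
    return np.array([e.mean() / n.mean()
                     for e, n in zip(thirds(emotional), thirds(normal))])
```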

가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구 (A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model)

  • 민소연;나덕수
    • 한국산학기술학회논문지
    • /
    • Vol. 14, No. 8
    • /
    • pp.3992-3998
    • /
    • 2013
  • This paper concerns a method for producing more varied synthetic speech by adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer. The emotional speech corpus was built in a form usable by a waveform-concatenation synthesizer, so that synthetic speech is generated through the same unit-selection process as the existing plain speech corpus. For emotional speech synthesis, text is entered with tags; when matching data exist at the intonation-phrase level, the phrase is synthesized as emotional speech, and otherwise as plain speech (a sketch of this fallback logic follows below). Breaks are one of the elements that make up prosody in speech, and breaks in emotional speech are more irregular than in plain speech, which makes it difficult to use the break information generated by the synthesizer directly for emotional speech synthesis. To solve this problem, variable break modelling [3] was applied. Experiments used a Japanese synthesizer, and natural emotional synthetic speech was obtained while using the plain-speech break prediction module unchanged.
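
The intonation-phrase fallback described above reduces to a lookup per phrase; the sketch below uses invented placeholder corpora.

```python
# Invented placeholder corpora; a real system indexes intonation phrases
# over unit-selection candidates rather than whole strings.
EMOTIONAL_CORPUS = {          # (emotion, phrase) -> emotional unit sequence
    ("sad", "안녕하세요"): "sad_units_0031",
}
PLAIN_UNITS = "plain_units"   # default: plain corpus + variable-break prosody

def select_units(phrase: str, emotion: str | None) -> str:
    """Per intonation phrase: emotional units if tagged and matched, else plain."""
    if emotion is not None and (emotion, phrase) in EMOTIONAL_CORPUS:
        return EMOTIONAL_CORPUS[(emotion, phrase)]
    return PLAIN_UNITS

print(select_units("안녕하세요", "sad"))   # -> sad_units_0031
print(select_units("반갑습니다", "sad"))   # -> plain_units (fallback)
```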

감정 적응을 이용한 감정 화자 인식 (Emotional Speaker Recognition using Emotional Adaptation)

  • 김원구
    • 전기학회논문지
    • /
    • Vol. 66, No. 7
    • /
    • pp.1105-1110
    • /
    • 2017
  • Speech with various emotions degrades the performance of a speaker recognition system. In this paper, a speaker recognition method using emotional adaptation is proposed to improve the performance of speaker recognition on affective speech. For emotional adaptation, an emotional speaker model was generated from an emotion-free speaker model using a small amount of affective training speech and a speaker adaptation method (one common adaptation scheme is sketched below). Since it is not easy to obtain sufficient affective speech from a speaker for training, using only a small amount of affective speech is very practical in real situations. The proposed method was evaluated using a Korean database containing four emotions. Experimental results show that the proposed method performs better than conventional methods in both speaker verification and speaker recognition.
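
The abstract does not name the adaptation scheme, but one common choice for adapting a GMM speaker model with little data is relevance-MAP adaptation of the component means, sketched below under that assumption; the relevance factor and feature arrays are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(gmm: GaussianMixture, X: np.ndarray,
                    relevance: float = 16.0) -> GaussianMixture:
    """Shift each Gaussian mean toward the adaptation data's posterior mean."""
    post = gmm.predict_proba(X)                  # (n_frames, n_components)
    n_k = post.sum(axis=0)                       # soft counts per component
    # Posterior-weighted data mean per component (guard against empty ones)
    x_bar = (post.T @ X) / np.maximum(n_k, 1e-10)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]   # data-vs-prior weight
    gmm.means_ = alpha * x_bar + (1.0 - alpha) * gmm.means_
    return gmm

# Usage sketch: train on neutral speech features, adapt with affective ones.
# gmm = GaussianMixture(n_components=64).fit(neutral_features)
# gmm = map_adapt_means(gmm, affective_features)
```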

다음색 감정 음성합성 응용을 위한 감정 SSML 처리기 (An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications)

  • 유세희;조희;이주현;홍기형
    • 한국음향학회지
    • /
    • Vol. 40, No. 5
    • /
    • pp.523-529
    • /
    • 2021
  • In this paper, we design and develop a Speech Synthesis Markup Language (SSML) processor that supports emotion markup. Speech synthesis technologies capable of expressing various voices and emotions are being developed, and to broaden their applications we designed Emotional SSML, an extension of SSML, the standardized speech interface markup language, that can express emotion. The Emotional SSML processor consists of a multi-speaker emotion text editor, a graphical user interface in which the desired voice and emotion can easily be marked on parts of the text; an Emotional SSML document generator, which turns the editing result into an Emotional SSML document; an Emotional SSML parser, which parses the generated document; and a sequencer, which controls the synthesis of the speech stream in cooperation with a synthesizer, based on the multi-speaker emotional synthesis sequence produced by the parser (a sketch of such a parser and sequencer follows below). Because it is based on SSML, an open standard independent of programming language and platform, the Emotional SSML processor developed in this paper can easily be connected to various speech synthesis engines, and we expect it to be used in developing applications that require multi-speaker, emotional speech synthesis.
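
A toy version of the parser and sequencer stages: walk an SSML-like document and emit a (voice, emotion, text) synthesis sequence in document order. The <emotion> element and its attributes below are assumed names, not necessarily the paper's exact Emotional SSML vocabulary.

```python
import xml.etree.ElementTree as ET

DOC = """
<speak>
  <voice name="female1">
    <emotion name="happy">Hello there!</emotion>
    And this part is neutral.
  </voice>
</speak>
"""

def synthesis_sequence(xml_text: str):
    """Yield (voice, emotion, text) triples in document order."""
    def walk(node, voice, emotion):
        if node.tag == "voice":
            voice = node.get("name", voice)
        elif node.tag == "emotion":
            emotion = node.get("name", emotion)
        if node.text and node.text.strip():
            yield (voice, emotion, node.text.strip())
        for child in node:
            yield from walk(child, voice, emotion)
            if child.tail and child.tail.strip():   # text after a child tag
                yield (voice, emotion, child.tail.strip())
    yield from walk(ET.fromstring(xml_text), "default", "neutral")

for step in synthesis_sequence(DOC):
    print(step)   # e.g. ('female1', 'happy', 'Hello there!')
```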

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • 음성과학
    • /
    • Vol. 15, No. 1
    • /
    • pp.63-72
    • /
    • 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear, and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1, and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature of the speech signal contributed to speaker discrimination to a different degree in different emotional modes. Sadness and neutral showed greater speaker discrimination than the other emotional modes (happiness, fear, and anger, in descending order of F-ratio). In other words, speaker specificity was better represented in sadness and neutral than in happiness, fear, and anger for every acoustic feature (the F-ratio computation is sketched below).

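The F-ratio itself is simple to state: the between-speaker variance of a feature divided by its within-speaker variance, with higher values meaning better speaker discrimination. A worked toy example with invented F0 values:

```python
import numpy as np

def f_ratio(samples_by_speaker: list[np.ndarray]) -> float:
    """samples_by_speaker: one array of feature values per speaker."""
    speaker_means = np.array([s.mean() for s in samples_by_speaker])
    grand_mean = np.concatenate(samples_by_speaker).mean()
    # Variance of speaker means around the grand mean
    between = ((speaker_means - grand_mean) ** 2).mean()
    # Average variance of each speaker's samples around their own mean
    within = np.mean([((s - s.mean()) ** 2).mean() for s in samples_by_speaker])
    return between / within

# Toy F0 values (Hz) for /a/ tokens from three speakers in one emotion:
f0 = [np.array([118., 121., 119.]), np.array([142., 139., 145.]),
      np.array([171., 168., 170.])]
print(round(f_ratio(f0), 1))   # high value -> good speaker separation
```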

대본 내용에 의한 정서음성 수집과정의 정규화에 대하여 (Normalization in Collection Procedures of Emotional Speech by Scriptual Context)

  • 조철우
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2006 Spring Conference Proceedings
    • /
    • pp.123-125
    • /
    • 2006
  • One of the biggest unsolved problems in emotional speech acquisition is how to create or find a situation that is close to the natural or desired state in humans. We propose a method of collecting emotional speech data by scriptual context. Several contexts from drama scripts were chosen by experts in the area and divided into six classes according to their contents. Two actors, one male and one female, read the text after recognizing the emotional situations in the script.


Stem-ML에 기반한 한국어 억양 생성 (Korean Prosody Generation Based on Stem-ML)

  • 한영호;김형순
    • 대한음성학회지:말소리
    • /
    • No. 54
    • /
    • pp.45-61
    • /
    • 2005
  • In this paper, we present a method of generating intonation contours for a Korean text-to-speech (TTS) system and a method of synthesizing emotional speech, both based on Soft template mark-up language (Stem-ML), a novel prosody generation model that combines mark-up tags and pitch generation in one framework. The evaluation shows that the intonation contours generated by Stem-ML are better than those of our previous work. Stem-ML is also found to be a useful tool for generating emotional speech by controlling a limited number of tags. A large emotional speech database is crucial for more extensive evaluation (a loose sketch of the soft-template idea follows below).

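Stem-ML's actual generation solves a constraint-based optimization over tag parameters, but the soft-template intuition, accent templates contributed by tags and blended into one F0 contour in proportion to their strength, can be loosely sketched as follows; all shapes and numbers are illustrative.

```python
import numpy as np

def accent_template(center: int, width: int, height: float, n: int) -> np.ndarray:
    """A Gaussian-shaped pitch excursion centered on an accented syllable."""
    t = np.arange(n)
    return height * np.exp(-0.5 * ((t - center) / (width / 2.0)) ** 2)

def blend_contour(n_frames: int, base_hz: float, tags: list[dict]) -> np.ndarray:
    """tags: [{'center':..., 'width':..., 'height':..., 'strength':...}]"""
    f0 = np.full(n_frames, base_hz)
    for tag in tags:
        # Each tag's template is scaled by its strength before blending
        f0 += tag["strength"] * accent_template(
            tag["center"], tag["width"], tag["height"], n_frames)
    return f0

# Stronger tags for emphatic (e.g. emotional) accents, weaker for neutral:
contour = blend_contour(200, 120.0, [
    {"center": 50, "width": 40, "height": 30.0, "strength": 1.0},
    {"center": 140, "width": 40, "height": 30.0, "strength": 0.4},
])
```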