• Title/Summary/Keyword: Feature compensation

A study on Gaussian mixture model deep neural network hybrid-based feature compensation for robust speech recognition in noisy environments (잡음 환경에 효과적인 음성 인식을 위한 Gaussian mixture model deep neural network 하이브리드 기반의 특징 보상)

  • Yoon, Ki-mu; Kim, Wooil
    • The Journal of the Acoustical Society of Korea, v.37 no.6, pp.506-511, 2018
  • This paper proposes a GMM (Gaussian Mixture Model)-DNN (Deep Neural Network) hybrid-based feature compensation method for effective speech recognition in noisy environments. In the proposed algorithm, the posterior probability used in the conventional GMM-based feature compensation method is computed by a DNN. Experimental results on the Aurora 2.0 framework and database demonstrate that the proposed GMM-DNN hybrid-based feature compensation method is more effective than the GMM-based method in both known and unknown noisy environments. In particular, in the unknown environments it achieves a 9.13 % relative improvement in average WER (Word Error Rate), with considerable gains at lower SNR (Signal to Noise Ratio) conditions such as 0 dB and 5 dB.
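
As a reading aid for the compensation step described in the abstract, here is a minimal sketch (not the authors' implementation): the clean feature is estimated as the noisy feature minus posterior-weighted bias vectors, with the posterior supplied either by a GMM, as in the conventional method, or by a DNN, as in the hybrid idea. All function names, shapes, and toy values are assumptions for illustration.

```python
# Minimal sketch: MMSE-style feature compensation with posterior-weighted bias vectors.
# Conventional scheme: posteriors from a GMM. Hybrid idea: a DNN's softmax replaces them.
import numpy as np

def gmm_posteriors(y, means, covs, weights):
    """Posterior P(k | y) under a diagonal-covariance GMM (conventional scheme)."""
    log_lik = -0.5 * np.sum((y - means) ** 2 / covs + np.log(2 * np.pi * covs), axis=1)
    log_w = np.log(weights) + log_lik
    log_w -= log_w.max()
    p = np.exp(log_w)
    return p / p.sum()

def compensate(y, posteriors, bias):
    """x_hat = y - sum_k P(k|y) * bias_k, where bias_k is a noisy-minus-clean offset."""
    return y - posteriors @ bias

# Toy example with 4 mixtures and 13-dimensional features (illustrative values only).
rng = np.random.default_rng(0)
K, D = 4, 13
means, covs = rng.normal(size=(K, D)), np.ones((K, D))
weights, bias = np.full(K, 1.0 / K), 0.1 * rng.normal(size=(K, D))
y = rng.normal(size=D)                                   # noisy feature vector
x_hat = compensate(y, gmm_posteriors(y, means, covs, weights), bias)

# Hybrid variant: a trained DNN would supply the posteriors instead, e.g.
# x_hat = compensate(y, dnn_softmax_output, bias)        # dnn_softmax_output: shape (K,)
```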

Speech enhancement method based on feature compensation gain for effective speech recognition in noisy environments (잡음 환경에 효과적인 음성인식을 위한 특징 보상 이득 기반의 음성 향상 기법)

  • Bae, Ara; Kim, Wooil
    • The Journal of the Acoustical Society of Korea, v.38 no.1, pp.51-55, 2019
  • This paper proposes a speech enhancement method utilizing a feature compensation gain for robust speech recognition in noisy environments. The feature compensation gain is obtained from the PCGMM (Parallel Combined Gaussian Mixture Model)-based feature compensation method employing variational model composition. The experimental results show that the proposed method significantly outperforms conventional front-end algorithms and our previous work over various background noise types and SNR (Signal to Noise Ratio) conditions under mismatched ASR (Automatic Speech Recognition) system conditions. By employing the noise model selection technique, the computational complexity is significantly reduced while the speech recognition performance is maintained at a similar level.
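
A minimal sketch of the gain idea, under assumptions rather than the paper's actual formulation: a per-band gain is formed as the ratio of the compensated to the noisy mel spectrum and mapped back to linear frequency bins before being applied to the noisy spectrum. The shapes, the mel filterbank, and the stand-in for the PCGMM output are hypothetical.

```python
# Minimal sketch: derive a per-band gain from a feature-compensated mel spectrum and
# apply it to the noisy spectrum for enhancement.
import numpy as np

def compensation_gain(noisy_mel, compensated_mel, floor=1e-8):
    """Gain in the mel-spectral domain: ratio of compensated to noisy energy."""
    return np.clip(compensated_mel / np.maximum(noisy_mel, floor), 0.0, 1.0)

def enhance(noisy_spec, gain, mel_filterbank):
    """Interpolate the mel-domain gain back to linear frequency bins and apply it.
    mel_filterbank: (n_mels, n_fft_bins) matrix; columns normalized to sum to one."""
    col_sum = np.maximum(mel_filterbank.sum(axis=0, keepdims=True), 1e-8)
    bin_gain = (mel_filterbank / col_sum).T @ gain       # (n_fft_bins,)
    return noisy_spec * bin_gain

# Toy shapes: 23 mel bands, 129 FFT bins (illustrative only).
rng = np.random.default_rng(1)
fb = np.abs(rng.normal(size=(23, 129)))
noisy_mel = fb @ np.abs(rng.normal(size=129))
compensated_mel = 0.8 * noisy_mel                        # stand-in for the PCGMM output
g = compensation_gain(noisy_mel, compensated_mel)
enhanced = enhance(np.abs(rng.normal(size=129)), g, fb)
```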

ARMA Filtering of Speech Features Using Energy Based Weights (에너지 기반 가중치를 이용한 음성 특징의 자동회귀 이동평균 필터링)

  • Ban, Sung-Min; Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea, v.31 no.2, pp.87-92, 2012
  • In this paper, a robust feature compensation method is proposed to deal with environmental mismatch. The proposed method applies energy-based weights, set according to the degree of speech presence, to MVA processing (Mean subtraction, Variance normalization, and ARMA filtering). The weights are further smoothed by moving average and maximum filters. The proposed feature compensation algorithm is evaluated on the AURORA 2 task and on a distant-talking experiment using a robot platform, yielding error rate reductions of 14.4 % and 44.9 %, respectively, compared with standard MVA processing.
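
The weighting scheme can be pictured with a short sketch, assuming a simple mapping from log energy to a speech-presence weight; this is not the authors' code, and the exact way the weight enters the MVA chain is an assumption.

```python
# Minimal sketch: energy-based frame weights, smoothed by moving-average and maximum
# filters, modulating MVA post-processing of cepstral features.
import numpy as np

def energy_weights(log_energy, win=5):
    w = (log_energy - log_energy.min()) / (np.ptp(log_energy) + 1e-8)   # 0..1 speech presence
    w = np.convolve(w, np.ones(win) / win, mode="same")                 # moving-average filter
    pad = np.pad(w, (win // 2, win // 2), mode="edge")
    return np.array([pad[i:i + win].max() for i in range(len(w))])      # maximum filter

def weighted_mva(cep, weights, order=2):
    cep = (cep - cep.mean(axis=0)) / (cep.std(axis=0) + 1e-8)           # mean/variance norm.
    out = cep.copy()
    for t in range(order, len(cep) - order):
        # ARMA smoother: average of past outputs and current/future inputs.
        smooth = (out[t - order:t].sum(axis=0)
                  + cep[t:t + order + 1].sum(axis=0)) / (2 * order + 1)
        out[t] = weights[t] * smooth + (1 - weights[t]) * cep[t]        # weighted application
    return out

rng = np.random.default_rng(2)
cep = rng.normal(size=(200, 13))                 # 200 frames of 13-dim cepstra (toy data)
w = energy_weights(rng.normal(size=200))
mva_features = weighted_mva(cep, w)
```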

An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length (모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상)

  • 박준배; 김태준; 최성용; 이정현
    • Proceedings of the IEEK Conference, 2003.11b, pp.195-198, 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, the speaking rate, both inter-speaker and intra-speaker, is a serious cause of mis-recognition. In this paper, we propose a method that compensates for the speaking rate using the ratio of each vowel's length in a phrase. First, the number of feature vectors in a phrase is estimated from the speaking-rate information. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vector extraction is performed with the number assigned to each syllable in the phrase. As a result, the accuracy of automatic speech recognition was improved by the proposed speaking-rate compensation method.
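
The allocation step described in the abstract reduces to simple proportional arithmetic. The following sketch assumes the total frame count and the vowel lengths are already given; names and numbers are illustrative, not from the paper.

```python
# Minimal sketch: allocate an estimated number of feature vectors to the syllables of a
# phrase in proportion to each syllable's vowel length.
from typing import List

def allocate_frames(total_frames: int, vowel_lengths_ms: List[float]) -> List[int]:
    """Split total_frames across syllables according to the vowel-length ratio."""
    total_len = sum(vowel_lengths_ms)
    raw = [total_frames * v / total_len for v in vowel_lengths_ms]
    alloc = [int(r) for r in raw]
    # Hand the rounding remainder to the syllables with the largest fractional parts.
    for i in sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True):
        if sum(alloc) == total_frames:
            break
        alloc[i] += 1
    return alloc

# Example: 120 frames estimated for the phrase, three syllables with vowels of
# 90, 150 and 60 ms (hypothetical measurements).
print(allocate_frames(120, [90.0, 150.0, 60.0]))   # -> [36, 60, 24]
```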

Robust Speech Recognition Using Weighted Auto-Regressive Moving Average Filter (가중 ARMA 필터를 이용한 강인한 음성인식)

  • Ban, Sung-Min; Kim, Hyung-Soon
    • Phonetics and Speech Sciences, v.2 no.4, pp.145-151, 2010
  • In this paper, a robust feature compensation method is proposed for improving the performance of speech recognition. The proposed method is incorporated into auto-regressive moving average (ARMA) based feature compensation. We employ variable weights for the ARMA filter according to the degree of speech activity, and pass the normalized cepstral sequence through the weighted ARMA filter. Additionally, when normalizing the cepstral sequences in training, the cepstral means and variances are estimated from all training utterances. Experimental results show that the proposed method significantly improves speech recognition performance in noisy and reverberant environments.
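
For reference, the conventional M-th order ARMA smoother used in cepstral post-processing, and one plausible weighted variant consistent with the abstract (the specific weighted form is an assumption, not the paper's equation), can be written as

$$\hat{c}_t = \frac{\sum_{i=1}^{M} \hat{c}_{t-i} + \sum_{i=0}^{M} \tilde{c}_{t+i}}{2M + 1}, \qquad \hat{c}^{\,w}_t = w_t \cdot \frac{\sum_{i=1}^{M} \hat{c}^{\,w}_{t-i} + \sum_{i=0}^{M} \tilde{c}_{t+i}}{2M + 1} + (1 - w_t)\,\tilde{c}_t,$$

where \tilde{c}_t is the normalized cepstral vector at frame t and w_t \in [0, 1] grows with the degree of speech activity.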

A Facial Feature Detection using Light Compensation and Appearance-based Features (빛 보상과 외형 기반의 특징을 이용한 얼굴 특징 검출)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services, v.7 no.3, pp.143-153, 2006
  • Facial feature detection is a basic technology in applications such as human-computer interfaces, face recognition, face tracking, and image database management. The speed of the feature detection algorithm is one of the main issues for facial feature detection in real-time environments, and factors such as variation in lighting, location, rotation, and complex backgrounds reduce the detection ratio. A facial feature detection algorithm is proposed to improve both the detection ratio and the detection speed. The proposed algorithm detects skin regions over the entire image after it has been improved by CLAHE, a light compensation algorithm for varying lighting conditions. To extract facial feature points from the detected skin regions, it uses appearance-based geometrical characteristics of the face. Since the method achieves fast detection speed as well as an efficient face-detection ratio, it can be applied to real-time face tracking and face recognition.
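
A minimal sketch of the first two stages the abstract describes, light compensation with CLAHE followed by a coarse skin-region mask; this is an assumption-laden illustration using OpenCV, not the paper's implementation, and the color-space thresholds are illustrative.

```python
# Minimal sketch: CLAHE-based light compensation followed by a simple skin-color mask.
import cv2
import numpy as np

def light_compensate(bgr):
    """Apply CLAHE to the L channel in LAB space to reduce lighting variation."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def skin_mask(bgr):
    """Rough skin-region mask in YCrCb space (threshold values are illustrative)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

img = np.full((240, 320, 3), 128, dtype=np.uint8)   # placeholder image
mask = skin_mask(light_compensate(img))             # candidate regions for feature search
```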

Acoustic Channel Compensation at Mel-frequency Spectrum Domain

  • Jeong, So-Young; Oh, Sang-Hoon; Lee, Soo-Young
    • The Journal of the Acoustical Society of Korea, v.22 no.1E, pp.43-48, 2003
  • The effects of linear acoustic channels are analyzed and compensated in the mel-frequency feature domain. Unlike popular RASTA filtering, our approach incorporates separate filters for each mel-frequency band, which results in better recognition performance for heavily reverberated speech.
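
The band-wise idea can be sketched as follows, assuming hypothetical first-order compensation filters whose coefficients differ per mel band; the actual filters in the paper are designed from channel analysis, which is not reproduced here.

```python
# Minimal sketch: temporal filtering of log mel-spectral trajectories with a separate
# filter per mel band, in contrast to applying one shared RASTA filter to every band.
import numpy as np
from scipy.signal import lfilter

def per_band_filter(log_mel, band_filters):
    """log_mel: (frames, n_bands); band_filters: list of (b, a) coefficient pairs per band."""
    out = np.empty_like(log_mel)
    for k, (b, a) in enumerate(band_filters):
        out[:, k] = lfilter(b, a, log_mel[:, k])
    return out

n_bands = 23
# Hypothetical first-order high-pass filters whose pole varies with the band index,
# standing in for channel-compensation filters designed per mel band.
filters = [(np.array([1.0, -1.0]), np.array([1.0, -(0.90 + 0.004 * k)]))
           for k in range(n_bands)]
compensated = per_band_filter(np.random.default_rng(3).normal(size=(300, n_bands)), filters)
```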

Noisy Speech Recognition Based on Noise-Adapted HMMs Using Speech Feature Compensation

  • Chung, Yong-Joo
    • Journal of the Institute of Convergence Signal Processing, v.15 no.2, pp.37-41, 2014
  • The vector Taylor series (VTS) based method usually employs clean speech Hidden Markov Models (HMMs) when compensating speech feature vectors or adapting the parameters of trained HMMs. It is well known that noisy speech HMMs trained by Multi-condition TRaining (MTR) and by the Multi-Model-based Speech Recognition framework (MMSR) perform better than clean speech HMMs in noisy speech recognition. In this paper, we propose a method to use such noise-adapted HMMs in the VTS-based speech feature compensation method. We derive a novel mathematical relation between the training and test noisy speech feature vectors in the log-spectrum domain, and the VTS is used to estimate the statistics of the test noisy speech. An iterative EM algorithm is used to estimate the training noisy speech from the test noisy speech along with the noise parameters. The proposed method was applied to noise-adapted HMMs trained by MTR and MMSR, and it reduced the relative word error rate significantly in noisy speech recognition experiments on the Aurora 2 database.
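
For context, the standard VTS machinery the abstract builds on is the log-spectral mismatch function and its first-order expansion; the sketch below shows that expansion with toy statistics and is not the paper's specific derivation relating training and test noisy speech.

```python
# Minimal sketch: the standard VTS mismatch function in the log-spectral domain and its
# first-order linearization, used to map clean/noise statistics to noisy-speech statistics.
import numpy as np

def mismatch(x, n):
    """Noisy log-spectrum y given clean x and noise n: y = x + log(1 + exp(n - x))."""
    return x + np.log1p(np.exp(n - x))

def vts_linearize(mu_x, mu_n):
    """First-order VTS around (mu_x, mu_n):
    y ~ mismatch(mu_x, mu_n) + G (x - mu_x) + (I - G)(n - mu_n),
    with G = diag(1 / (1 + exp(mu_n - mu_x)))."""
    g = 1.0 / (1.0 + np.exp(mu_n - mu_x))
    return mismatch(mu_x, mu_n), np.diag(g)

mu_x = np.zeros(10)                          # clean log-spectral mean (toy values)
mu_n = -2.0 * np.ones(10)                    # noise log-spectral mean (toy values)
y0, G = vts_linearize(mu_x, mu_n)

# Noisy mean/covariance under the linearization (diagonal toy covariances).
Sigma_x, Sigma_n = np.eye(10), 0.1 * np.eye(10)
mu_y = y0
Sigma_y = G @ Sigma_x @ G.T + (np.eye(10) - G) @ Sigma_n @ (np.eye(10) - G).T
```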

PCMM-Based Feature Compensation Method Using Multiple Model to Cope with Time-Varying Noise (시변 잡음에 대처하기 위한 다중 모델을 이용한 PCMM 기반 특징 보상 기법)

  • 김우일; 고한석
    • The Journal of the Acoustical Society of Korea, v.23 no.6, pp.473-480, 2004
  • In this paper, we propose an effective feature compensation scheme based on a speech model in order to achieve robust speech recognition. The proposed feature compensation method is based on the parallel combined mixture model (PCMM). Previous PCMM work requires a highly sophisticated procedure to estimate the combined mixture model so that it reflects the time-varying noisy conditions at every utterance. The proposed schemes cope with time-varying background noise by interpolating multiple mixture models. We apply the 'data-driven' method to PCMM for more reliable model combination and introduce a frame-synched version for a posteriori estimation of the environment. In order to reduce the computational complexity caused by the multiple models, we propose a mixture-sharing technique: statistically similar Gaussian components are selected and smoothed versions are generated for sharing. The performance is examined on Aurora 2.0 and on a speech corpus recorded while driving a car. The experimental results indicate that the proposed schemes are effective in realizing robust speech recognition and in reducing the computational complexity under both simulated environments and real-life conditions.
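
The interpolation of multiple mixture models can be pictured with a short sketch, assuming the pre-built environment models share mixture indexing and that environment posteriors are already available; these are illustrative assumptions, not the PCMM implementation.

```python
# Minimal sketch: interpolate several pre-built noisy-speech mixture models with
# environment posteriors so the combined model can track time-varying background noise.
import numpy as np

def interpolate_models(env_posteriors, model_means, model_weights):
    """env_posteriors: (E,); model_means: (E, K, D); model_weights: (E, K).
    Returns the mixture weights and means of the interpolated model, assuming the
    E environment models share the same mixture indexing."""
    w = np.tensordot(env_posteriors, model_weights, axes=1)     # (K,)
    mu = np.tensordot(env_posteriors, model_means, axes=1)      # (K, D)
    return w / w.sum(), mu

rng = np.random.default_rng(4)
E, K, D = 3, 8, 13                                              # environments, mixtures, dims
means = rng.normal(size=(E, K, D))
weights = rng.dirichlet(np.ones(K), size=E)                     # (E, K)
post = np.array([0.7, 0.2, 0.1])                                # environment posteriors (toy)
mix_w, mix_mu = interpolate_models(post, means, weights)
```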

Robust Speech Recognition using Noise Compensation Method Based on Eigen - Environment (Eigen - Environment 잡음 보상 방법을 이용한 강인한 음성인식)

  • Song Hwa Jeon; Kim Hyung Soon
    • MALSORI, no.52, pp.145-160, 2004
  • In this paper, a new noise compensation method based on the eigenvoice framework in the feature space is proposed to reduce the mismatch between training and testing environments. The difference between clean and noisy environments is represented by a linear combination of K eigenvectors that capture the variation among environments. In the proposed method, the performance improvement of the speech recognition system depends largely on how the noisy models and the bias vector set are constructed; two construction methods are proposed, one based on MAP adaptation and the other using a stereo DB. In experiments on the Aurora 2 DB, the eigen-environment method obtained a 44.86 % relative improvement over the baseline system. In particular, in the clean-condition training mode, the proposed method yielded a 66.74 % relative improvement, which is better than several methods previously proposed in the Aurora project.
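
A minimal sketch of the eigen-environment idea, under assumptions: bias vectors collected from training environments are reduced by PCA to K eigenvectors, and the weights of their linear combination are fit to an observed noisy-minus-clean offset by least squares (the paper's actual estimation procedure may differ).

```python
# Minimal sketch: model the environment bias as a linear combination of K eigenvectors
# obtained from training-environment bias vectors, then fit the combination weights.
import numpy as np

rng = np.random.default_rng(5)
D, n_envs, K = 13, 20, 4
train_bias = rng.normal(size=(n_envs, D))            # noisy-minus-clean bias per training env.

# PCA over the training bias vectors -> mean bias and K leading eigenvectors.
mean_bias = train_bias.mean(axis=0)
_, _, Vt = np.linalg.svd(train_bias - mean_bias, full_matrices=False)
E = Vt[:K].T                                          # (D, K) eigen-environment basis

def estimate_bias(observed_offset):
    """Least-squares weights for the observed offset in the eigen-environment space."""
    w, *_ = np.linalg.lstsq(E, observed_offset - mean_bias, rcond=None)
    return mean_bias + E @ w                          # reconstructed environment bias

test_offset = rng.normal(size=D)                      # toy noisy-minus-clean estimate
compensated_bias = estimate_bias(test_offset)         # subtract this from noisy features
```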
