• Title/Summary/Keyword: Noisy environments

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks (신경망 기반 음성, 영상 및 문맥 통합 음성인식)

  • 김명원;한문성;이순신;류정우
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.67-77 / 2004
  • Recent research has focused on fusing audio and visual features for reliable speech recognition in noisy environments. In this paper, we propose a neural-network-based model for robust speech recognition that integrates audio, visual, and contextual information. The Bimodal Neural Network (BMNN) is a four-layer multi-layer perceptron, each layer of which performs a certain level of abstraction of its input features. In the BMNN, the third layer combines audio and visual features of speech to compensate for the loss of audio information caused by noise. To further improve recognition accuracy in noisy environments, we also propose post-processing based on contextual information, namely the sequential patterns of words spoken by a user. Our experimental results show that the model outperforms any single-modality model. In particular, when the contextual information is used, we obtain over 90% recognition accuracy even in noisy environments, a significant improvement over the state of the art in speech recognition. Our research demonstrates that diverse sources of information need to be integrated to improve speech recognition accuracy, particularly in noisy environments.
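
A minimal sketch of the fusion idea described in this abstract, assuming a four-layer perceptron in which each modality is abstracted separately and the third layer combines the two streams; the layer sizes, activation choice, and feature dimensions below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def bimodal_forward(audio_feat, visual_feat, p):
    """Hypothetical BMNN-style forward pass: separate abstraction of each
    modality, fusion in a later layer, then word posteriors."""
    h_a = relu(p["W_a"] @ audio_feat + p["b_a"])            # audio abstraction
    h_v = relu(p["W_v"] @ visual_feat + p["b_v"])            # visual abstraction
    h_av = relu(p["W_av"] @ np.concatenate([h_a, h_v]) + p["b_av"])  # fusion layer
    logits = p["W_out"] @ h_av + p["b_out"]                  # output layer
    return np.exp(logits) / np.exp(logits).sum()             # softmax posteriors

# Illustrative dimensions: 39-dim audio features, 20-dim lip features, 10 words
rng = np.random.default_rng(0)
p = {
    "W_a": 0.1 * rng.standard_normal((32, 39)), "b_a": np.zeros(32),
    "W_v": 0.1 * rng.standard_normal((32, 20)), "b_v": np.zeros(32),
    "W_av": 0.1 * rng.standard_normal((32, 64)), "b_av": np.zeros(32),
    "W_out": 0.1 * rng.standard_normal((10, 32)), "b_out": np.zeros(10),
}
posteriors = bimodal_forward(rng.standard_normal(39), rng.standard_normal(20), p)
```

The contextual post-processing could then rescore these word posteriors against word-sequence statistics; the abstract does not detail that scheme, so it is not reproduced here.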

Noise Reduction Using the Standard Deviation of the Time-Frequency Bin and Modified Gain Function for Speech Enhancement in Stationary and Nonstationary Noisy Environments

  • Lee, Soo-Jeong;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea / v.26 no.3E / pp.87-96 / 2007
  • In this paper we propose a new noise reduction algorithm for stationary and nonstationary noisy environments. The algorithm classifies the speech and noise contributions in time-frequency bins, and is not based on a spectral algorithm or a minimum-statistics approach. It relies on the ratio of the standard deviation of the noisy power spectrum in each time-frequency bin to its normalized time-frequency average. We show that good quality can be achieved for the enhanced speech signal by choosing appropriate values for $\delta_t$ and $\delta_f$. The proposed method greatly reduces the noise while providing enhanced speech with lower residual noise and somewhat higher mean opinion score (MOS), background intrusiveness (BAK), and signal distortion (SIG) scores than conventional methods.
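
A rough sketch of the kind of statistic this abstract describes: the standard deviation of the noisy power spectrum over a local time-frequency neighborhood, divided by a normalized time-frequency average, turned into a gain. The roles assigned to `delta_t` and `delta_f` here (neighborhood half-widths) and the gain mapping are assumptions, not the paper's modified gain function.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tf_bin_gain(power_spec, delta_f=2, delta_t=2, floor=0.1):
    """Hypothetical gain from the local std of the noisy power spectrum
    (freq x time array) relative to its normalized global average."""
    size = (2 * delta_f + 1, 2 * delta_t + 1)
    local_mean = uniform_filter(power_spec, size=size)
    local_sq_mean = uniform_filter(power_spec ** 2, size=size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    ratio = local_std / (power_spec.mean() + 1e-12)
    return np.clip(ratio / (ratio + 1.0), floor, 1.0)  # monotone mapping, placeholder

# Usage (illustrative): enhanced_power = tf_bin_gain(noisy_power) * noisy_power
```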

Syllable-Type-Based Phoneme Weighting Techniques for Listening Intelligibility in Noisy Environments (소음 환경에서의 명료한 청취를 위한 음절형태 기반 음소 가중 기술)

  • Lee, Young Ho;Joo, Jong Han;Choi, Seung Ho
    • Phonetics and Speech Sciences / v.6 no.3 / pp.165-169 / 2014
  • The intelligibility of speech transmitted to listeners can be significantly degraded by ambient noise in noisy environments such as auditoriums and train stations. Noise-masked speech is hard for listeners to recognize. Among conventional methods for improving speech intelligibility, the consonant-vowel intensity ratio (CVR) approach reinforces the power of all consonants. However, excessively reinforced consonants do not help recognition, and only some consonants are improved by the CVR approach. In this paper, we propose a corrective weighting (CW) approach that reinforces consonant power differently according to the Korean syllable type, such as consonant-vowel-consonant (CVC), consonant-vowel (CV), and vowel-consonant (VC), taking the level of listeners' recognition into account. The proposed CW approach was evaluated with a subjective test, the Comparison Category Rating (CCR) test of ITU-T P.800, and scored 0.18 and 0.24 higher than the unprocessed speech and the CVR approach, respectively.
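
A toy sketch of syllable-type-dependent consonant weighting, assuming consonant segments have already been located and labeled with their syllable type; the gain values are placeholders, not the weights derived in the paper.

```python
import numpy as np

# Hypothetical per-syllable-type consonant gains (placeholders, not the paper's values)
SYLLABLE_GAIN = {"CVC": 1.4, "CV": 1.2, "VC": 1.3}

def weight_consonants(signal, segments):
    """segments: list of (start, end, syllable_type, is_consonant) in sample indices.
    Consonant spans are amplified by a gain chosen per syllable type."""
    out = signal.astype(float).copy()
    for start, end, syl_type, is_consonant in segments:
        if is_consonant:
            out[start:end] *= SYLLABLE_GAIN.get(syl_type, 1.0)
    return out

# Example: boost the consonant span of a CV syllable, leave the vowel untouched
x = np.random.default_rng(0).standard_normal(16000)
y = weight_consonants(x, [(1000, 1800, "CV", True), (1800, 4000, "CV", False)])
```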

Selective pole filtering based feature normalization for performance improvement of short utterance recognition in noisy environments (잡음 환경에서 짧은 발화 인식 성능 향상을 위한 선택적 극점 필터링 기반의 특징 정규화)

  • Choi, Bo Kyeong;Ban, Sung Min;Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.9 no.2 / pp.103-110 / 2017
  • The pole filtering concept has been successfully applied to cepstral feature normalization techniques for noise-robust speech recognition. In this paper, we propose applying pole filtering selectively, only to the speech intervals, in order to further improve recognition performance for short utterances in noisy environments. Experimental results on the AURORA 2 task with clean-condition training show that the proposed selectively pole-filtered cepstral mean normalization (SPFCMN) and selectively pole-filtered cepstral mean and variance normalization (SPFCMVN) yield error rate reductions of 38.6% and 45.8%, respectively, compared to the baseline system.
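
A minimal sketch of the "selective" aspect only: cepstral mean and variance statistics estimated over speech frames (given here by a VAD mask) and then applied to all frames. The pole-filtering step that smooths dominant spectral poles before the statistics are computed is omitted; this is not the authors' implementation.

```python
import numpy as np

def selective_cmvn(cepstra, speech_mask, eps=1e-8):
    """cepstra: (num_frames, num_coeffs); speech_mask: boolean array per frame.
    Normalization statistics come from speech frames only, per the selective idea."""
    speech = cepstra[speech_mask]
    mu = speech.mean(axis=0)
    sigma = speech.std(axis=0) + eps
    return (cepstra - mu) / sigma

# Usage with a crude energy-based mask (placeholder VAD, not the paper's)
feats = np.random.default_rng(1).standard_normal((200, 13))
mask = feats[:, 0] > feats[:, 0].mean()
normalized = selective_cmvn(feats, mask)
```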

Auditory Representations for Robust Speech Recognition in Noisy Environments (잡음 환경에서의 음성 인식을 위한 청각 표현)

  • Kim, Doh-Suk;Lee, Soo-Young;Kil, Rhee-M.
    • The Journal of the Acoustical Society of Korea / v.15 no.5 / pp.90-98 / 1996
  • An auditory model is proposed for robust speech recognition in noisy environments. The model consists of cochlear bandpass filters and nonlinear stages, and it represents frequency and intensity information efficiently even in noisy environments. Frequency information of the signal is obtained from zero-crossing intervals, while intensity information is incorporated through peak detectors and saturating nonlinearities. The robustness of zero-crossings for frequency estimation is verified by an analytic relationship that expresses the variance of level-crossing interval perturbations as a function of the crossing level. The proposed auditory model is computationally efficient and free of the many unknown parameters found in other auditory models. Speaker-independent speech recognition experiments demonstrate the robustness of the proposed method.
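
A small sketch of the zero-crossing idea for a single band-limited channel: the dominant frequency is estimated from the intervals between successive upward zero-crossings, and peak amplitudes within those intervals could supply the intensity term. The cochlear filtering and saturating nonlinearity are omitted, and the parameters are illustrative.

```python
import numpy as np

def zero_crossing_frequency(x, fs):
    """Estimate the dominant frequency of a band-limited signal from the
    intervals between successive upward zero-crossings."""
    negative = np.signbit(x)
    upward = np.flatnonzero(negative[:-1] & ~negative[1:])  # last index before each upward crossing
    if len(upward) < 2:
        return 0.0
    intervals = np.diff(upward) / fs                          # seconds per period
    return 1.0 / np.mean(intervals)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
print(zero_crossing_frequency(tone, fs))  # approximately 440 Hz
```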

A Study on the Robust Bimodal Speech-recognition System in Noisy Environments (잡음 환경에 강인한 이중모드 음성인식 시스템에 관한 연구)

  • 이철우;고인선;계영철
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.28-34 / 2003
  • Recent research has focused on jointly using lip motion (i.e., visual speech) and audio for reliable speech recognition in noisy environments. This paper deals with combining the result of a visual speech recognizer with that of a conventional speech recognizer by weighting each result: it proposes a method for determining proper weights, in which the weights are determined autonomously depending on the amount of noise in the speech and on the image quality. Simulation results show that combining audio and visual recognition with the proposed method achieves 84% recognition performance even in severely noisy environments. It is also shown that, in the presence of image blur, the newly proposed weighting method, which also takes the blur into account, yields better performance than the other methods.
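
A compact sketch of weighted audio-visual score fusion: recognizer scores are combined with an audio weight that grows with estimated SNR and with image blur (a blurred image makes the visual recognizer less reliable). The weighting rule below is a placeholder, not the autonomous scheme from the paper.

```python
import numpy as np

def fuse_av_scores(audio_scores, visual_scores, snr_db, blur):
    """audio_scores / visual_scores: per-word scores from each recognizer.
    blur in [0, 1]: estimated image degradation. Returns (best word index, scores)."""
    w_audio = 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 5.0))  # hypothetical SNR-driven weight
    w_audio = np.clip(w_audio + 0.2 * blur, 0.0, 1.0)        # lean on audio when images are blurred
    combined = w_audio * audio_scores + (1.0 - w_audio) * visual_scores
    return int(np.argmax(combined)), combined

word, scores = fuse_av_scores(np.array([0.2, 0.5, 0.3]),
                              np.array([0.1, 0.3, 0.6]),
                              snr_db=5.0, blur=0.4)
```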

Robust Speech Endpoint Detection in Noisy Environments for HRI (Human-Robot Interface) (인간로봇 상호작용을 위한 잡음환경에 강인한 음성 끝점 검출 기법)

  • Park, Jin-Soo;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.147-156 / 2013
  • In this paper, a new speech endpoint detection method for moving robot platforms in noisy environments is proposed. In the conventional method, the endpoint of speech is obtained by applying an edge detection filter that finds abrupt changes in the feature domain. However, since the frame-energy feature is unstable in such noisy environments, it is difficult to find the endpoint of speech accurately. Therefore, a novel feature extraction method based on the twice-iterated fast Fourier transform (TIFFT) and statistical models of speech is proposed. The proposed feature is applied to an edge detection filter for effective detection of the speech endpoint. Experiments show a substantial improvement over the conventional method.
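
A rough sketch of a twice-iterated FFT feature for a single frame, assuming the second FFT is applied to the magnitude spectrum of the first; the statistical speech modeling and the exact edge-detection filter from the paper are not reproduced, and the summarization into a scalar is an assumption.

```python
import numpy as np

def tifft_feature(frame):
    """Apply an FFT twice: to the windowed frame, then to its magnitude
    spectrum, and summarize the second spectrum as a per-frame scalar."""
    windowed = frame * np.hanning(len(frame))
    spec1 = np.abs(np.fft.rfft(windowed))
    spec2 = np.abs(np.fft.rfft(spec1))
    return np.log(np.sum(spec2 ** 2) + 1e-12)

def edge_filter(feature_track, width=5):
    """Simple step-edge detector over the per-frame feature track (illustrative)."""
    kernel = np.concatenate([-np.ones(width), np.ones(width)]) / width
    return np.convolve(feature_track, kernel, mode="same")
```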

Voice Activity Detection Using Global Speech Absence Probability Based on Teager Energy in Noisy Environments (잡음환경에서 Teager Energy 기반의 전역 음성부재확률을 이용하는 음성검출)

  • Park, Yun-Sik;Lee, Sang-Min
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.1 / pp.97-103 / 2012
  • In this paper, we propose a novel voice activity detection (VAD) algorithm to effectively distinguish speech from non-speech in various noisy environments. The global speech absence probability (GSAP), derived from the likelihood ratio (LR) of a statistical model, is widely used as a feature parameter for VAD. However, the conventional GSAP feature is not sufficient to distinguish speech from noise at low signal-to-noise ratios (SNRs). The presented VAD algorithm uses a GSAP based on the Teager energy (TE) as the feature parameter to improve the decision performance for speech segments in noisy environments. The performance of the proposed VAD algorithm is evaluated by objective tests under various environments, and better results than the conventional methods are obtained.
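
A minimal sketch of the discrete Teager energy operator that the feature builds on; turning it into a likelihood-ratio-based global speech absence probability requires the statistical noise model from the paper, which is only hinted at in the final comment.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[x(n)] = x(n)^2 - x(n-1) * x(n+1)."""
    te = np.empty_like(x, dtype=float)
    te[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    te[0], te[-1] = te[1], te[-2]   # pad the endpoints
    return te

def frame_te(x, frame_len=256, hop=128):
    """Mean Teager energy per frame, a simple stand-in for the TE-based feature."""
    te = teager_energy(x)
    n = 1 + (len(te) - frame_len) // hop
    return np.array([te[i * hop:i * hop + frame_len].mean() for i in range(n)])

# A GSAP-style decision would compare frame_te(x) against noise statistics via
# a likelihood ratio; a fixed threshold would be only a crude placeholder.
```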

Impedance-based Long-term Structural Health Monitoring for Tidal Current Power Plant Structure in Noisy Environments (잡음 환경 하에서의 전기-역학적 임피던스 기반 조류발전 구조물의 장기 건전성 모니터링)

  • Min, Ji-Young;Shim, Hyo-Jin;Yun, Chung-Bang;Yi, Jin-Hak
    • Journal of Ocean Engineering and Technology / v.25 no.4 / pp.59-65 / 2011
  • In structural health monitoring (SHM) using electro-mechanical impedance signatures, a critical issue for very large structures is extracting the best damage diagnosis results while minimizing unknown environmental effects, including temperature, humidity, and acoustic vibration. If the impedance signatures fluctuate because of these factors, the fluctuations should be eliminated, since they may hide the characteristics of damage in the host structure. This paper presents a long-term SHM technique for tidal current power plant structures under an unknown noisy environment. The measured impedance signatures contained significant variations, especially in the audio frequency range. To eliminate these variations, a continuous principal component analysis was applied, and the results were compared with the conventional approach using the RMSD (root mean square deviation) and CC (cross-correlation coefficient) damage indices. It was found that this approach can be used effectively for long-term SHM in noisy environments.
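
A short sketch of the two conventional damage indices named above, computed from a baseline and a current impedance signature measured over the same frequency points; the sign convention (1 minus the correlation coefficient) is one common choice, and the continuous principal component analysis step is not shown.

```python
import numpy as np

def rmsd_index(z_baseline, z_current):
    """Root mean square deviation between impedance signatures (real parts assumed)."""
    return np.sqrt(np.sum((z_current - z_baseline) ** 2) / np.sum(z_baseline ** 2))

def cc_index(z_baseline, z_current):
    """1 - cross-correlation coefficient, so larger values indicate more change."""
    return 1.0 - np.corrcoef(z_baseline, z_current)[0, 1]

# Illustrative signatures over a 30-40 kHz band with a simulated local change
freq = np.linspace(30e3, 40e3, 500)
z0 = np.sin(freq / 1e3) + 0.01 * np.random.default_rng(0).standard_normal(500)
z1 = z0 + 0.05 * np.exp(-((freq - 35e3) / 500) ** 2)
print(rmsd_index(z0, z1), cc_index(z0, z1))
```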

A Study on the Noisy Speech Recognition Based on Multi-Model Structure Using an Improved Jacobian Adaptation (향상된 JA 방식을 이용한 다 모델 기반의 잡음음성인식에 대한 연구)

  • Chung, Yong-Joo
    • Speech Sciences / v.13 no.2 / pp.75-84 / 2006
  • Various methods have been proposed to overcome the problem of speech recognition in noisy conditions. Among them, model compensation methods such as parallel model combination (PMC) and Jacobian adaptation (JA) have been found to perform efficiently. JA is quite effective when hidden Markov models (HMMs) have already been trained in a condition similar to the target environment. In previous work, we proposed an improved JA method to make it more robust against changing environments at recognition time. In this paper, we further improve its performance by compensating the delta-mean vectors and covariance matrices of the HMMs, and we investigate its feasibility in a multi-model structure for noisy speech recognition. The experimental results show that the proposed method improves the robustness of JA and that the multi-model approach is a viable solution for noisy speech recognition.
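
A bare-bones sketch of the first-order idea behind Jacobian adaptation: a mean vector trained in a reference noisy condition is shifted by a Jacobian matrix times the change in the noise estimate. The Jacobian here is simply a supplied matrix; how it is computed, and the delta-mean and covariance compensation that the paper adds, are not reproduced.

```python
import numpy as np

def jacobian_adapt_mean(mu_ref, jacobian, noise_ref, noise_new):
    """First-order Jacobian adaptation of an HMM mean vector:
    mu_adapted = mu_ref + J @ (noise_new - noise_ref)."""
    return mu_ref + jacobian @ (noise_new - noise_ref)

dim = 13
rng = np.random.default_rng(2)
mu_ref = rng.standard_normal(dim)      # mean trained in the reference noisy condition
J = 0.3 * np.eye(dim)                  # placeholder Jacobian, not an estimated one
n_ref = rng.standard_normal(dim)       # reference noise cepstral mean
n_new = n_ref + 0.5                    # noise estimate in the target environment
mu_adapted = jacobian_adapt_mean(mu_ref, J, n_ref, n_new)
```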
