Automatic speech recognition using acoustic Doppler signal

(Korean title: Speech recognition using ultrasonic Doppler)

  • Received : 2015.07.21
  • Accepted : 2015.09.17
  • Published : 2016.01.31


In this paper, a new automatic speech recognition (ASR) method was proposed in which ultrasonic Doppler signals were used instead of conventional speech signals. The proposed method has advantages over conventional speech-based ASR, including robustness against acoustic noise and the user comfort associated with a non-contact sensor. In the proposed method, a 40 kHz ultrasonic signal was radiated toward the mouth and the reflected ultrasonic signals were then received. The frequency shift caused by the Doppler effect was used to implement ASR. Unlike the previous method, which employed a single-channel ultrasonic signal, the proposed method employed multi-channel ultrasonic signals acquired at various locations. Principal component analysis (PCA) coefficients were used as the ASR features, and a hidden Markov model (HMM) with a left-right topology was adopted. To verify the feasibility of the proposed ASR, a speech recognition experiment was carried out on 60 Korean isolated words obtained from six speakers. The experimental results showed that the overall word recognition rates were comparable with those of conventional speech-based ASR methods, and that the performance of the proposed method was superior to that of the conventional single-channel ASR method. In particular, an average recognition rate of 90 % was maintained under noisy environments.
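The front end described above — narrowband analysis of the received signal around the 40 kHz carrier, followed by PCA to obtain the recognition features — can be sketched as below. This is a minimal illustration, not the paper's implementation: the sampling rate, analysis band, frame size, and number of PCA coefficients are assumptions chosen for the example, and the input here is a synthetic signal standing in for a received multi-channel recording.

```python
import numpy as np

FS = 96_000   # sampling rate (assumption; must exceed twice the 40 kHz carrier)
FC = 40_000   # ultrasonic carrier radiated toward the mouth
FRAME = 1024  # samples per analysis frame (assumption)
N_PCA = 8     # number of PCA coefficients kept as features (assumption)

def doppler_spectra(x, fs=FS, fc=FC, frame=FRAME, band=500):
    """Per-frame magnitude spectra in a narrow band around the carrier.

    Articulator motion shifts reflected energy away from fc (Doppler
    effect), so the spectral shape in this band carries the speech
    information used for recognition.
    """
    n_frames = len(x) // frame
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    sel = (freqs >= fc - band) & (freqs <= fc + band)
    frames = x[: n_frames * frame].reshape(n_frames, frame)
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    return spec[:, sel]

def pca_features(spec, n_components=N_PCA):
    """Project mean-removed spectra onto the top principal components."""
    centered = spec - spec.mean(axis=0)
    # SVD of the centered data matrix: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Synthetic received signal: carrier plus one Doppler-shifted reflection
t = np.arange(FS) / FS  # 1 s of signal
rx = np.sin(2 * np.pi * FC * t) + 0.3 * np.sin(2 * np.pi * (FC + 120) * t)

spec = doppler_spectra(rx)   # (frames, band bins)
feat = pca_features(spec)    # (frames, N_PCA) feature vectors for the HMM
```

In an actual multi-channel setup, the per-channel feature vectors would be concatenated frame by frame before being passed to the left-right HMM recognizer.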


Speech recognition;Ultrasonic Doppler signals;Silent speech interface;Robust speech recognition




Grant: Basic research support

Supported by: Konkuk University