• Title/Summary/Keyword: Voice Recognition

A Study on Development and Real-Time Implementation of Voice Recognition Algorithm (화자독립방식에 의한 음성인식 알고리즘 개발 및 실시간 실현에 관한 연구)

  • Jung, Yang-geun;Jo, Sang Young;Yang, Jun Seok;Park, In-Man;Han, Sung Hyun
    • Journal of the Korean Society of Industry Convergence / v.18 no.4 / pp.250-258 / 2015
  • In this research, we proposed a new approach to implementing real-time motion control of a biped robot based on voice commands for unmanned factory automation (FA). Voice is a convenient way for humans to communicate with robots. To command many robot tasks by voice, an equal number of voice patterns must be recognizable, and the more patterns a recognizer must distinguish, the longer recognition takes. In this paper, a practical voice recognition system that can recognize a large number of task commands is proposed. The proposed system consists of a general-purpose microprocessor and a voice recognition processor that can recognize only a limited number of voice patterns. The robot tasks are classified and organized into directories such that the number of tasks under each directory is not more than the maximum number of patterns the voice recognition processor can recognize, so the tasks under each directory can be distinguished by voice command. Simulation and experiments illustrated the reliability of the voice recognition rates for application to the manufacturing process.
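
The directory scheme described in this abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical Python model (the MAX_PATTERNS value and the command names are illustrative, not taken from the paper) of how grouping tasks into small directories lets a recognizer with a limited pattern capacity cover a larger command set.

```python
# Hypothetical sketch of directory-based command organization: a recognizer
# that distinguishes at most MAX_PATTERNS words at a time covers more commands
# when robot tasks are grouped into small directories.

MAX_PATTERNS = 8  # assumed per-directory limit of the recognition processor

COMMAND_TREE = {
    "walk":  ["forward", "backward", "turn left", "turn right", "stop"],
    "arm":   ["raise", "lower", "grip", "release"],
    "state": ["report position", "report battery", "standby"],
}

# Every directory must stay within the processor's pattern limit.
for directory, tasks in COMMAND_TREE.items():
    assert len(tasks) <= MAX_PATTERNS, f"too many tasks under '{directory}'"

def resolve(directory_word: str, task_word: str) -> str:
    """Map a two-step voice command (directory word, task word) to a robot task."""
    tasks = COMMAND_TREE.get(directory_word)
    if tasks is None or task_word not in tasks:
        return "unknown command"
    return f"{directory_word}/{task_word}"

if __name__ == "__main__":
    print(resolve("walk", "forward"))  # -> walk/forward
    print(resolve("arm", "grip"))      # -> arm/grip
```

The two-step resolution mirrors the abstract's point: recognition capacity is bounded per directory, while the total command vocabulary grows with the number of directories.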

The Structural Relationships between AI-based Voice Recognition Service Characteristics, Interactivity and Intention to Use (AI기반 음성인식 서비스 특성과 상호 작용성 및 이용 의도 간의 구조적 관계)

  • Lee, SeoYoung
    • Journal of Information Technology Services / v.20 no.5 / pp.189-207 / 2021
  • Voice interaction combined with artificial intelligence is poised to revolutionize human-computer interaction with the advent of virtual assistants. This paper analyzes the effect of interactive elements of AI-based voice recognition services, such as sympathy, assurance, intimacy, and trust, on the intention to use. A questionnaire survey was conducted with 284 smartphone/smart TV users in Korea, and the collected data were analyzed by structural equation modeling and bootstrapping. The key results are as follows. First, AI-based voice recognition service characteristics such as sympathy, assurance, intimacy, and trust have positive effects on interactivity with the service. Second, interactivity with the service has a positive effect on the intention to use. Third, characteristics such as interactional enjoyment and intimacy have direct positive effects on the intention to use. Fourth, sympathy, assurance, intimacy, and trust have indirect positive effects on the intention to use, mediated by interactivity with the service. Investigating the factors that affect interactivity and the intention to use voice recognition assistants has both practical and academic implications.
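
For readers who want to reproduce this kind of analysis, the sketch below shows a minimal structural equation model in Python using the semopy package. The model description, variable names, and the CSV file are assumptions for illustration; the paper's actual measurement model and bootstrapping procedure are not reproduced here.

```python
# Hypothetical sketch of the kind of structural model described above, using
# semopy. Column names and the survey file are assumed, not from the paper.
import pandas as pd
from semopy import Model

MODEL_DESC = """
interactivity ~ sympathy + assurance + intimacy + trust
intention ~ interactivity + intimacy
"""

data = pd.read_csv("survey_responses.csv")  # assumed 284-respondent dataset
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # estimated path coefficients and p-values
```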

A Study on Realization of Speech Recognition System based on VoiceXML for Railroad Reservation Service (철도예약서비스를 위한 VoiceXML 기반의 음성인식 구현에 관한 연구)

  • Kim, Beom-Seung;Kim, Soon-Hyob
    • Journal of the Korean Society for Railway / v.14 no.2 / pp.130-136 / 2011
  • This paper presents a method for realizing real-time speech recognition using VoiceXML in a SIP-based telephony environment for a railroad reservation service. In this method, a voice signal arriving through the PSTN or the Internet is handled as a VoiceXML dialog, the transferred signal is processed by the speech recognition system, and the output is returned to the VoiceXML dialog and delivered to the user. The VASR system consists of a dialog server that manages the dialog, an application server that processes the voice signal, and a speech recognition system. To handle the voice signal in the telephony environment, the signal is recorded using the Record tag of VoiceXML, transferred to the speech recognition system, and played back in real time.
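
The record/recognize/return flow can be outlined in pseudocode-style Python. Every function below is an illustrative placeholder; the real system uses SIP telephony, VoiceXML documents, and a separate recognition server, none of which are reproduced here. The sketch only shows the order of the steps.

```python
# Placeholder outline of the flow: record via the dialog, recognize,
# return the result into the dialog. Not the paper's actual interfaces.

def record_utterance() -> bytes:
    """Stand-in for the VoiceXML <record> step in the telephony channel."""
    return b"\x00" * 16000  # placeholder for one second of recorded PCM audio

def recognize(audio: bytes) -> str:
    """Stand-in for handing the recorded audio to the speech recognition system."""
    return "Seoul to Busan, two adults, 9 am"  # canned result for the sketch

def continue_dialog(result: str) -> str:
    """Stand-in for returning the recognition result into the VoiceXML dialog."""
    return f"You asked for: {result}. Shall I make the reservation?"

if __name__ == "__main__":
    audio = record_utterance()
    print(continue_dialog(recognize(audio)))
```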

Voice Recognition Performance Improvement using the Convergence of Bayesian method and Selective Speech Feature (베이시안 기법과 선택적 음성특징 추출을 융합한 음성 인식 성능 향상)

  • Hwang, Jae-Chun
    • Journal of the Korea Convergence Society / v.7 no.6 / pp.7-11 / 2016
  • Voice recognition systems designed for white-noise conditions do not recognize speech correctly when variable noise is mixed with the voice. Therefore, in this paper we propose a method that combines a Bayesian technique with selective speech feature extraction for effective voice recognition. Filter bank frequency response coefficients are used for selective speech extraction: observed variables are formed from all possible pairwise combinations of observations, and speech features are extracted selectively from the noisy voice signal according to the energy ratio of the output. This provides noise elimination, and recognition rates are improved by combining the result with Bayesian voice recognition. We confirmed that the vocabulary recognition rate is 2.3% higher than that of the HMM and CHMM methods, respectively.
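
A minimal sketch of the two ideas in this abstract, under stated assumptions, is given below: frames are kept only if their speech-band energy ratio is high enough, and the selected band-energy features are classified with a Gaussian naive Bayes model standing in for the Bayesian step. The band edges, threshold, and toy signals are assumptions, not the paper's settings.

```python
# Minimal sketch (not the paper's implementation): select frames by a
# speech-band energy ratio, then classify sub-band energy features with a
# Gaussian naive Bayes model as the "Bayesian" step.
import numpy as np
from sklearn.naive_bayes import GaussianNB

SR = 16000
SPEECH_BAND = (300.0, 3400.0)                    # assumed voice band
SUB_BANDS = ((300.0, 1000.0), (1000.0, 3400.0))  # assumed filter-bank split

def power_spectrum(frame):
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / SR)
    return spec, freqs

def band_energy_ratio(frame) -> float:
    """Fraction of the frame's spectral energy inside the speech band."""
    spec, freqs = power_spectrum(frame)
    in_band = (freqs >= SPEECH_BAND[0]) & (freqs < SPEECH_BAND[1])
    return spec[in_band].sum() / (spec.sum() + 1e-12)

def sub_band_energies(frame):
    """Filter-bank-style features: energy in each sub-band of the speech band."""
    spec, freqs = power_spectrum(frame)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in SUB_BANDS])

def make_frame(freq, rng):
    """Toy frame: one sinusoid plus white noise."""
    t = np.arange(400) / SR
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(t.size)

rng = np.random.default_rng(0)
frames = [make_frame(f, rng) for f in (500.0, 2000.0, 6000.0) for _ in range(30)]
labels = [0] * 30 + [1] * 30 + [2] * 30          # class 2 lies outside the speech band

# Selective extraction: drop frames dominated by out-of-band (noise) energy.
kept = [(f, lab) for f, lab in zip(frames, labels) if band_energy_ratio(f) > 0.5]
X = np.array([sub_band_energies(f) for f, _ in kept])
y = np.array([lab for _, lab in kept])

clf = GaussianNB().fit(X, y)                      # the Bayesian classification step
print(f"kept {len(kept)}/{len(frames)} frames, train accuracy = {clf.score(X, y):.2f}")
```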

A Voice Command System for Autonomous Robots

  • Hong, Soon-Hyuk;Jeon, Jae-Wook
    • Transactions on Control, Automation and Systems Engineering / v.3 no.1 / pp.51-57 / 2001
  • Promoting students' interest is very important in undergraduate engineering education. One technique for achieving this is to select appropriate projects and integrate them with regular courses. In this paper, a voice recognition system for autonomous robots is proposed as a project for teaching students about microprocessors efficiently. The proposed system consists of a microprocessor and a voice recognition processor that can recognize a limited number of voice patterns. The commands of the autonomous robots are classified and organized so that one voice recognition processor can distinguish the robot commands under each directory. Thus, the proposed system can distinguish more voice commands than a single voice recognition processor can. A voice command system for three autonomous robots is implemented with an Intel 80C196KC microprocessor and an HM2007 voice recognition processor. The advantages of integrating this system with regular courses are also described.
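
The organization step mentioned above, splitting a flat command list into directories that each fit within one recognition processor's capacity, can be sketched as follows. The per-directory limit and command names are illustrative, not the HM2007's actual figures.

```python
# Hypothetical sketch of the organization step: a flat list of robot commands
# is split into directories so that no directory exceeds the number of
# patterns one recognition processor can distinguish.
from typing import Dict, List

MAX_PATTERNS_PER_DIRECTORY = 5   # assumed recognizer limit for this sketch

def organize(commands: List[str],
             limit: int = MAX_PATTERNS_PER_DIRECTORY) -> Dict[str, List[str]]:
    """Split a flat command list into numbered directories of at most `limit` entries."""
    return {
        f"dir{idx}": commands[start:start + limit]
        for idx, start in enumerate(range(0, len(commands), limit))
    }

robot_commands = [
    "go forward", "go backward", "turn left", "turn right", "stop",
    "speed up", "slow down", "follow line", "avoid obstacle", "return home",
    "lights on", "lights off",
]

for name, cmds in organize(robot_commands).items():
    print(name, cmds)
# Each directory stays within the single processor's limit, so the total
# number of distinguishable commands grows with the number of directories.
```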

Control System for Smart Medical Illumination Based on Voice Recognition (음성인식기반 스마트 의료조명 제어시스템)

  • Kim, Min-Kyu;Lee, Soo-In;Cho, Hyun-Kil
    • IEMEK Journal of Embedded Systems and Applications / v.8 no.3 / pp.179-184 / 2013
  • Voice recognition technology is a technological foundation that plays an important role in medical devices with smart functions. This paper describes the implementation of a control system, based on voice recognition, that can be used as part of illumination equipment for medical applications (IEMA). The control system can essentially be divided into five parts: the microphone, training part, recognition part, memory part, and control part. The system was implemented on the RSC-4x evaluation board, which includes a microcontroller for voice recognition. To investigate the usefulness of the implemented control system, experiments on the recognition rate were carried out at different input distances. As a result, the recognition rate of the control system was more than 95% at distances between 0.5 and 2 m. The results verify that the implemented control system performs well as a smart control system for an IEMA.
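
A rough sketch of the control part is given below: recognized command words drive a small lamp state machine. The command vocabulary and brightness steps are assumptions for illustration; the paper's exact word set is not listed in this abstract.

```python
# Hypothetical control part: map recognized command words to lamp states.

class IlluminationController:
    """Tiny state machine driven by recognized voice commands."""

    LEVELS = (0, 25, 50, 75, 100)   # percent brightness steps (assumed)

    def __init__(self) -> None:
        self.level_index = 0        # start with the lamp off

    def handle(self, word: str) -> int:
        if word == "on":
            self.level_index = len(self.LEVELS) - 1
        elif word == "off":
            self.level_index = 0
        elif word == "brighter":
            self.level_index = min(self.level_index + 1, len(self.LEVELS) - 1)
        elif word == "dimmer":
            self.level_index = max(self.level_index - 1, 0)
        return self.LEVELS[self.level_index]

if __name__ == "__main__":
    lamp = IlluminationController()
    for recognized in ("on", "dimmer", "dimmer", "off"):
        print(recognized, "->", lamp.handle(recognized), "%")
```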

The small scale Voice Dialing System using TMS320C30 (TMS320C30을 이용한 소규모 Voice Dialing 시스템)

  • 이항섭
    • Proceedings of the Acoustical Society of Korea Conference / 1991.06a / pp.58-63 / 1991
  • This paper describes the development of a small-scale voice dialing system using the TMS320C30. The recognition vocabulary consists of 50 department names within a university. For vocabularies of small to medium size, word-unit recognition is more practical than phoneme-unit or syllable-unit recognition. In this paper, we performed recognition and model generation using DMS (Dynamic Multi-Section) models and implemented the voice dialing system on the TMS320C30. We achieved a 98% recognition rate with 22 sections and a weight of 0.6, with a recognition time of 4 seconds.
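
DMS modeling divides each word utterance into a fixed number of sections and represents each section by a statistic of its feature frames; recognition then compares an input's sections against each stored word model. The sketch below is a strongly simplified illustration (plain section means and unweighted Euclidean distances), not the paper's DMS formulation with section weighting.

```python
# Strongly simplified illustration of section-based word matching in the
# spirit of DMS: each word template is the per-section mean of its feature
# frames, and an input is scored by the summed distance between sections.
import numpy as np

N_SECTIONS = 8   # the paper reports best results around 22 sections; 8 keeps the toy small

def section_means(frames: np.ndarray, n_sections: int = N_SECTIONS) -> np.ndarray:
    """Split a (num_frames, feat_dim) sequence into sections and average each one."""
    chunks = np.array_split(frames, n_sections)
    return np.stack([c.mean(axis=0) for c in chunks])

def distance(model: np.ndarray, frames: np.ndarray) -> float:
    """Summed Euclidean distance between a word model and an input's section means."""
    return float(np.linalg.norm(model - section_means(frames), axis=1).sum())

def recognize(frames: np.ndarray, models: dict) -> str:
    return min(models, key=lambda word: distance(models[word], frames))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feat_dim = 12
    # Toy "training" utterances for two vocabulary words.
    templates = {w: rng.normal(loc=i, size=(40, feat_dim))
                 for i, w in enumerate(["physics", "library"])}
    models = {w: section_means(f) for w, f in templates.items()}
    # A noisy repetition of "library" should still match its model.
    test = templates["library"] + 0.1 * rng.standard_normal((40, feat_dim))
    print(recognize(test, models))   # -> library
```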

Voice Recognition Based on Adaptive MFCC and Deep Learning for Embedded Systems (임베디드 시스템에서 사용 가능한 적응형 MFCC 와 Deep Learning 기반의 음성인식)

  • Bae, Hyun Soo;Lee, Ho Jin;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.10 / pp.797-802 / 2016
  • This paper proposes a novel voice recognition method based on adaptive MFCC and deep learning for embedded systems. To enhance the recognition ratio of the proposed voice recognizer, ambient noise mixed into the voice signal has to be eliminated. However, noise filtering processes may damage the voice data and diminish the recognition ratio. In this paper, a filter has been designed for the frequency range of the voice signal, and imposed weights are used to reduce data deterioration. In addition, a deep learning algorithm, which does not require a database in the recognition stage, has been adapted for embedded systems, which inherently provide only small amounts of memory. The experimental results suggest that the proposed deep learning algorithm and an HMM voice recognizer, both utilizing the proposed adaptive MFCC algorithm, achieve a better recognition ratio in a noisy environment than recognizers using conventional MFCC algorithms.
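
A minimal sketch of the kind of pipeline this abstract describes is shown below: the signal is band-limited to an assumed voice range before MFCC extraction, and a small dense network (a stand-in for the paper's deep learning model) classifies the averaged MFCC vectors. The filter order, band edges, feature averaging, and network size are all illustrative choices, not the paper's parameters.

```python
# Sketch under assumptions: voice-band filtering before MFCC extraction,
# then a small dense network that could fit on an embedded target.
import numpy as np
import librosa
from scipy.signal import butter, lfilter
import tensorflow as tf

SR = 16000

def voice_bandpass(y: np.ndarray, low: float = 300.0, high: float = 3400.0) -> np.ndarray:
    """Attenuate energy outside the assumed voice band before feature extraction."""
    b, a = butter(4, [low, high], btype="bandpass", fs=SR)
    return lfilter(b, a, y)

def utterance_features(y: np.ndarray) -> np.ndarray:
    """13 MFCCs averaged over time: one small fixed-size vector per utterance."""
    mfcc = librosa.feature.mfcc(y=voice_bandpass(y), sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

def make_toy_utterance(freq: float, rng) -> np.ndarray:
    """Synthetic stand-in for a recorded word: a tone buried in noise."""
    t = np.arange(SR) / SR
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

rng = np.random.default_rng(0)
X = np.stack([utterance_features(make_toy_utterance(f, rng))
              for f in (400.0, 1200.0) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(13,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```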

Development of Language Study Machine Using Voice Recognition Technology (음성인식 기술을 이용한 대화식 언어 학습기 개발)

  • Yoo, Jae-Tack;Yoon, Tae-Seob
    • Proceedings of the KIEE Conference / 2005.10b / pp.201-203 / 2005
  • The best way to study a language is to talk with a native speaker. Voice recognition technology can be used to develop a language study machine. Both SD (speaker-dependent) and SI (speaker-independent) voice recognition methods are used in the language study machine. MP3 player, FM radio, and alarm clock functions are added to enhance the value of the product. The machine is designed around a DSP (digital signal processing) chip for voice recognition, an MP3 encoder/decoder chip, an FM tuner, and an SD flash memory card. This paper deals with the application of SD and SI voice recognition, the flash memory file system, a PC download function over USB, an English conversation text function based on SD flash memory, LCD display control, MP3 encoding and decoding, and related features. The study contents are stored on the SD flash memory card, so the machine can serve users from children to adults simply by changing the card.

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.89-97 / 2021
  • A voice interface can not only make daily life more convenient through artificial intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition. The system can provide machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification with our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
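
A sketch of how command recognition and speaker identification can be combined in one utterance pipeline is given below. The Google call uses the third-party SpeechRecognition package as a stand-in for the paper's API integration, and the SincNet embedding is replaced by a clearly labeled placeholder, so the speaker side is purely illustrative.

```python
# Hypothetical combination of command recognition and speaker identification.
# The SincNet embedding extractor is a placeholder; only the structure of the
# pipeline (transcribe, then identify the speaker) follows the abstract.
import numpy as np
import speech_recognition as sr

COMMANDS = {"start machine", "stop machine", "report status"}  # assumed subset of the 26

def recognize_command(wav_path: str) -> str:
    """Transcribe one utterance with Google's free web speech API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    text = recognizer.recognize_google(audio).lower()
    return text if text in COMMANDS else "unknown command"

def speaker_embedding(wav_path: str) -> np.ndarray:
    """Placeholder for a SincNet-style embedding extractor (not implemented here)."""
    rng = np.random.default_rng(abs(hash(wav_path)) % (2 ** 32))
    return rng.standard_normal(128)

def identify_speaker(wav_path: str, enrolled: dict, threshold: float = 0.5) -> str:
    """Nearest enrolled worker by cosine similarity; reject below the threshold."""
    emb = speaker_embedding(wav_path)
    best, best_sim = "unknown", -1.0
    for worker, ref in enrolled.items():
        sim = float(np.dot(emb, ref) / (np.linalg.norm(emb) * np.linalg.norm(ref)))
        if sim > best_sim:
            best, best_sim = worker, sim
    return best if best_sim >= threshold else "unauthorized"

if __name__ == "__main__":
    enrolled = {"worker_001": speaker_embedding("enroll_001.wav")}
    # With a real recording, the two calls would be combined per utterance:
    # print(recognize_command("command.wav"), identify_speaker("command.wav", enrolled))
```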