• Title/Summary/Keyword: Voice Translation


A Study of Hybrid Automatic Interpretation Support System (하이브리드 자동 통역지원 시스템에 관한 연구)

  • Lim, Chong-Gyu; Gang, Bong-Gyun; Park, Ju-Sik; Kang, Bong-Kyun
    • Journal of Korean Society of Industrial and Systems Engineering / v.28 no.3 / pp.133-141 / 2005
  • Previous research has mainly focused on the individual technologies of voice recognition, voice synthesis, translation, and bone conduction, and commercial products using these technologies have recently appeared. This research proposes a new automatic interpretation support system concept that combines established bone-conduction technology with a wireless system. The proposed system has three major components. First, a hybrid headset combining bone conduction and other technologies captures the user's voice. Second, a small server attached to the user converts the recognized voice into a digital signal and translates it into the other user's language with a translation algorithm. Third, the translated output is transmitted wirelessly to the other party, whose computer converts the signal back into voice through the same hybrid system. The hybrid system delivers a clear message regardless of environmental noise or the user's hearing ability, and network technology keeps communication between users clear regardless of distance.
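
A minimal sketch of the three-stage recognize/translate/transmit pipeline the abstract describes. All function and type names here are hypothetical placeholders; the paper does not publish an implementation.

```python
# Sketch of the three-stage pipeline: recognize -> translate -> transmit.
# All names are hypothetical placeholders, not the authors' code.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    language: str

def recognize(audio_frames: bytes, language: str) -> Utterance:
    """Stage 1: hybrid headset (bone conduction + microphone) -> recognized text."""
    return Utterance(text="hello", language=language)  # stub recognizer

def translate(utt: Utterance, target_language: str) -> Utterance:
    """Stage 2: the wearable server translates into the listener's language."""
    translated = f"[{utt.text} -> {target_language}]"  # stub translator
    return Utterance(text=translated, language=target_language)

def transmit(utt: Utterance) -> None:
    """Stage 3: send the translated result wirelessly to the other party."""
    print(f"sending ({utt.language}): {utt.text}")

if __name__ == "__main__":
    utt = recognize(b"\x00" * 160, language="ko")
    transmit(translate(utt, target_language="en"))
```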

Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon; Oh, Amsuk
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.189-194 / 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally exchanges information, potentially increase the fatigue of meeting participants. In this study, we designed a video conferencing platform utilizing virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and realistic VR video conferencing content authoring tool support system were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3D Max software. With the Photon software development kit, voice recognition was designed to perform automatic text translation with the Watson application programming interface, allowing the online video conferencing participants to communicate smoothly even if using different languages. It is expected that the proposed video conferencing platform will enable conference participants to interact and improve their work efficiency.
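
A hedged sketch of the translation step the abstract names: feeding recognized speech text to the Watson Language Translator service via IBM's Python SDK. The credentials, endpoint, and the glue to Photon/Unity are placeholders, not the authors' code.

```python
# Sketch: recognized utterance text -> Watson Language Translator.
# Assumes the ibm-watson Python SDK; API key and URL are placeholders.
from ibm_watson import LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")       # placeholder credential
translator = LanguageTranslatorV3(version="2018-05-01",
                                  authenticator=authenticator)
translator.set_service_url("YOUR_SERVICE_URL")         # placeholder endpoint

def translate_caption(text: str, model_id: str = "ko-en") -> str:
    """Translate one recognized utterance for display to other participants."""
    result = translator.translate(text=text, model_id=model_id).get_result()
    return result["translations"][0]["translation"]

print(translate_caption("안녕하세요, 회의를 시작하겠습니다."))
```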

Sign Language Image Recognition System Using Artificial Neural Network

  • Kim, Hyung-Hoon; Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.193-200 / 2019
  • Hearing-impaired people live in a voice-centered culture, but the difficulty of communicating with hearing people through sign language causes many of them discomfort in daily and social life and various disadvantages contrary to their wishes. In this paper, we therefore study a sign language translation system for communication between hearing people and hearing-impaired sign language users, and implement a prototype system. Previous studies on such translation systems fall into two types: those using video image systems and those using shape input devices. However, existing sign language translation systems do not recognize the varied sign language expressions of different users and require special devices. In this paper, we use an artificial neural network, a machine learning method, to recognize varied sign language expressions, and we aim to improve the usability of the sign language translation system by using ordinary smartphones and various video equipment for sign language image recognition.
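
A minimal sketch of an artificial-neural-network image classifier of the kind the abstract describes. The architecture, the 64x64 input size, and the 10-class output are assumptions; the paper does not publish its exact network.

```python
# Sketch of an ANN sign image classifier (architecture is an assumption).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes: int = 10) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),            # resized smartphone video frame
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one unit per sign
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_classifier()
model.summary()
```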

Translating English By-Phrase Passives into Korean: A Parallel Corpus Analysis (영한 병렬 코퍼스에 나타난 영어 수동문의 한국어 번역)

  • Lee, Seung-Ah
    • Journal of English Language & Literature / v.56 no.5 / pp.871-905 / 2010
  • This paper is motivated by Watanabe's (2001) observation that English by-phrase passives are sometimes translated into Japanese object topicalization constructions. That is, the original English sentence in the passive may be translated into the active voice with the logical object topicalized. A number of scholars, including Chomsky (1981) and Baker (1992), have remarked that languages have various ways to avoid focusing on the logical subject. The aim of the present study is to examine the translation equivalents of English by-phrase passives in an English-Korean parallel corpus compiled by the author. A small sample of articles from Newsweek magazine and its published Korean translation reveals that there are indeed many ways to translate English by-phrase passives, including object topicalization (12.5%). Among the 64 translated sentences analyzed and classified, 12 (18.8%) examples were problematic in terms of agent defocusing, which is the primary function of passives. Of these 12 instances, five cases were identified where an alternative translation would be more suitable. The results suggest that the functional characteristics of English by-phrase passives should be highlighted in translator training as well as language teaching.
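
A worked check of the percentages in the abstract. The count of 8 object topicalizations is inferred from the reported 12.5% of 64 sentences; the count of 12 problematic cases is stated directly.

```python
# Verify the reported proportions: 64 sentences in total.
total = 64
object_topicalization = 8    # inferred: 8 / 64 = 12.5%
problematic = 12             # stated:  12 / 64 = 18.8%

print(f"object topicalization: {object_topicalization / total:.1%}")   # 12.5%
print(f"problematic agent defocusing: {problematic / total:.1%}")      # 18.8%
```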

Text/Voice Recognition & Translation Application Development Using Open-Source (오픈소스를 이용한 문자/음성 인식 및 번역 앱 개발)

  • Yun, Tae-Jin; Seo, Hyo-Jong; Kim, Do-Heon
    • Proceedings of the Korean Society of Computer Information Conference / 2017.07a / pp.425-426 / 2017
  • In this paper, we propose a text/voice recognition and translation app using Tesseract-OCR, an open-source engine supported by Google. Recently, various smartphone apps using recognition and translation of foreign languages, including Korean, have been developed and have become travel essentials. The app processes images captured with the smartphone camera to raise the recognition rate, supports partial recognition through a crop function, supplements Tesseract-OCR's training data to improve accuracy, lets the user choose among similar candidate sentences recognized through the Google speech recognition API, and then translates and displays the result. The translation function allows the user to select the source and target languages, and supports English, Korean, Japanese, and Chinese by default. Using these functions, apps can be developed for various application fields, such as license plate recognition and search based on text contained in photos.
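
A hedged sketch of the OCR step: crop a region of a camera image (the app's partial-recognition feature) and run Tesseract-OCR on it. This assumes the pytesseract wrapper and installed Korean/English language data; it is not the authors' app code.

```python
# Sketch: crop + Tesseract-OCR, assuming pytesseract and kor/eng traineddata.
from PIL import Image
import pytesseract

def recognize_text(image_path: str, box=None) -> str:
    """OCR an image; box = (left, upper, right, lower) crop for partial recognition."""
    img = Image.open(image_path)
    if box is not None:
        img = img.crop(box)      # partial recognition of a user-selected region
    img = img.convert("L")       # grayscale often improves the recognition rate
    return pytesseract.image_to_string(img, lang="kor+eng")

print(recognize_text("sign.jpg", box=(100, 50, 400, 120)))  # hypothetical file
```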


Signalling Protocol Validation of Internet-ISDN Interworking Gateway for Voice Telephony (음성 전화를 위한 Internet-ISDN 연동 게이트웨이 신호 프로토콜 검증)

  • Yu, Sang-Sin
    • The Transactions of the Korea Information Processing Society / v.6 no.10 / pp.2740-2751 / 1999
  • Critical to more widespread use of Internet telephony are smooth interoperability with the existing telephone network and improved quality of voice connections. Of these requirements, interoperability comes through the use of Internet telephony gateways, which perform protocol translation between an IP network and the Public Switched Telephone Network (PSTN). In this paper, we focus on the necessity and feasibility of interworking and derive the requirements for interoperability between IP networks and the PSTN. For this purpose, we analyze the signaling protocols for the gateway system and model the interworking part with a Petri net. Through reachability trees of the Petri net model, we confirm that interoperability is possible and that the model is deadlock-free and satisfies liveness and boundedness.
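
A minimal sketch of Petri net reachability analysis of the kind used to validate such signaling models: breadth-first search over markings, followed by a deadlock check. The two-place net below is a toy example, not the paper's actual gateway model.

```python
# Reachability analysis of a toy Petri net (places: idle, connecting).
from collections import deque

# Each transition: (tokens consumed per place, tokens produced per place).
transitions = {
    "setup":   ((1, 0), (0, 1)),   # idle -> connecting
    "release": ((0, 1), (1, 0)),   # connecting -> idle
}

def fire(marking, consume, produce):
    """Fire a transition if enabled; return the new marking or None."""
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None

def reachable(initial):
    """Enumerate all reachable markings (assumes a bounded net)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        marking = queue.popleft()
        for consume, produce in transitions.values():
            nxt = fire(marking, consume, produce)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

markings = reachable((1, 0))
print(markings)  # {(1, 0), (0, 1)}
# Deadlock check: a reachable marking where no transition is enabled.
deadlocks = [m for m in markings
             if all(fire(m, c, p) is None for c, p in transitions.values())]
print("deadlock-free:", not deadlocks)
```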


A Study on Finger Language Translation System using Machine Learning and Leap Motion (머신러닝과 립 모션을 활용한 지화 번역 시스템 구현에 관한 연구)

  • Son, Da Eun; Go, Hyeong Min; Shin, Haeng yong
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.552-554 / 2019
  • Deaf and mute people (those with hearing impairments or speech disorders) communicate using sign language because communicating by voice is difficult. However, since not everyone uses sign language, communication through it is limited to people who know it. In this paper, a finger language (fingerspelling) translation system is proposed and implemented as a means for the disabled and the non-disabled to communicate without difficulty. The proposed algorithm captures fingerspelling data with a Leap Motion sensor and trains on the data using machine learning technology to increase the recognition rate. Simulation results show the performance improvement.
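
A minimal sketch of classifying fingerspelled letters from Leap Motion-style hand features with a machine learning classifier. The feature layout (fingertip coordinates), the random stand-in data, and the kNN model are assumptions; the paper does not specify its exact pipeline.

```python
# Sketch: kNN classification of Leap Motion-style hand feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in features: 5 fingertips x (x, y, z) = 15 values per frame.
X_train = rng.normal(size=(200, 15))
y_train = rng.integers(0, 5, size=200)        # 5 example letter classes

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

frame = rng.normal(size=(1, 15))              # one new hand frame
print("predicted letter class:", clf.predict(frame)[0])
```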

OnDot: Braille Training System for the Blind (시각장애인을 위한 점자 교육 시스템)

  • Kim, Hak-Jin; Moon, Jun-Hyeok; Song, Min-Uk; Lee, Se-Min; Kong, Ki-sok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.41-50 / 2020
  • This paper deals with a braille education system that complements the shortcomings of existing braille learning products. An application dedicated to the blind performs all of its functions through touch gestures and voice guidance for user convenience, and a braille kit was produced for educational purposes with an Arduino and 3D printing. The system supports the following functions: first, learning of the most basic braille, such as initial consonants, final consonants, vowels, and abbreviations; second, checking learned braille by solving quizzes at each step; third, translation of braille. Experiments confirmed the recognition rate of touch gestures and the accuracy of braille expression, and translations came out as intended. The system allows blind people to learn braille efficiently.
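
A minimal sketch of the translation function: mapping text to Unicode braille cells. The paper's system handles Korean braille (initial/final consonants, vowels, abbreviations); the English grade-1 subset below is a simplified stand-in for illustration only.

```python
# Sketch: text -> Unicode braille cells (English grade-1 subset a-j).
BRAILLE = {
    "a": "⠁", "b": "⠃", "c": "⠉", "d": "⠙", "e": "⠑",
    "f": "⠋", "g": "⠛", "h": "⠓", "i": "⠊", "j": "⠚",
    " ": "⠀",  # blank braille cell
}

def to_braille(text: str) -> str:
    """Translate lowercase text to braille cells; unknown characters -> '?'."""
    return "".join(BRAILLE.get(ch, "?") for ch in text.lower())

print(to_braille("bad cab"))  # ⠃⠁⠙⠀⠉⠁⠃
```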

Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup; Lee, Kyungsun; Kim, Sung Soo; Lee, Jae-Won; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.34 no.4 / pp.321-327 / 2015
  • In this paper, we propose an algorithm for robust visual voice activity detection (VVAD) for enhanced speech recognition. Conventional VVAD algorithms detect visual speech frames from the motion of the lip region using optical flow or chaos-inspired measures. Optical flow-based VVAD is difficult to adopt in driving scenarios because of its computational complexity, and chaos theory-based VVAD, while invariant to illumination changes, is sensitive to the motion translations caused by the driver's head movements. The proposed local variance histogram (LVH) is robust to pixel intensity changes caused by both illumination change and translation. For further robustness to environmental changes, we adopt a novel threshold estimation based on total variance change. Experimental results show that the proposed VVAD algorithm is robust in various driving situations.
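
A minimal sketch of a local variance histogram feature over a lip region of interest. The window size, bin count, and value range are assumptions; the paper defines its own LVH variant and the threshold-adaptation rule on top of it.

```python
# Sketch: local variance histogram (LVH) of a grayscale lip ROI.
import numpy as np

def local_variance_histogram(roi: np.ndarray, win: int = 4, bins: int = 16) -> np.ndarray:
    """Variance of each win x win block, summarized as a normalized histogram."""
    h, w = roi.shape
    variances = [
        roi[y:y + win, x:x + win].var()
        for y in range(0, h - win + 1, win)
        for x in range(0, w - win + 1, win)
    ]
    # Max variance of 8-bit pixels is (255/2)^2; use it as the histogram range.
    hist, _ = np.histogram(variances, bins=bins, range=(0.0, (255.0 / 2) ** 2))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
lip_roi = rng.integers(0, 256, size=(32, 48)).astype(np.float64)  # stand-in frame
print(local_variance_histogram(lip_roi))
```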

Some Notational Problems of the Translation of Japanese Stops [k, t] and Affricates [ts, tʃ] into Korean (일본어 파열음[k, t]과 파찰음[ts, tʃ]의 국어 표기상의 문제점)

  • Lee, Young-Hee
    • Proceedings of the KSPS conference / 2007.05a / pp.187-192 / 2007
  • The purpose of this paper is to show that the current notation of Japanese proper names in Korean has some problems: it cannot represent the distinction between voiced and voiceless sounds. The paper also aims to give a more correct notation that is coherent and efficient. After introducing general knowledge about the phonemes of Japanese, I measured the voice onset time of the stops [k, t] at the beginning, in the middle, and at the end of a word, and compared the spectrogram of affricates with that of fricatives. In conclusion, the Japanese voiceless [k, t, tʃ] should be written as [ㅋ, ㅌ, ㅊ], the voiced [g, d, dʒ] as [ㄱ, ㄷ, ㅈ], and the affricate [ts] as [ㅊ] in Korean.
