CNN-based Online Sign Language Translation Counseling System

  • Park, Won-Cheol (Division of Computer Engineering, Kongju National University) ;
  • Park, Koo-Rack (Division of Computer Science & Engineering, Kongju National University)
  • Received : 2021.03.29
  • Accepted : 2021.05.20
  • Published : 2021.05.28

Abstract

It is difficult for the hearing impaired to use counseling services without sign language interpretation. Because sign language interpreters are in short supply, it often takes a long time to be connected to one, and in many cases a connection cannot be made at all. Therefore, in this paper, we propose a system that captures sign language on video using OpenCV and a CNN (Convolutional Neural Network), recognizes the signing motion, converts its meaning into text data, and provides it to the user. The counselor can then conduct the counseling session by reading the stored sign language translation. Counseling is possible without a professional sign language interpreter, which reduces the burden of waiting for one. If the proposed system is applied to counseling services for the hearing impaired, it is expected to improve the effectiveness of counseling and to promote future academic research on counseling for the hearing impaired.
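
As a concrete illustration of the pipeline described in the abstract, the following is a minimal sketch of a frame-level recognizer in Python: OpenCV captures video frames, a pre-trained CNN classifies each frame, and confident predictions are accumulated as text for the counselor to read. The model file (sign_cnn.h5), label list (labels.txt), input resolution, and confidence threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): OpenCV captures webcam
# frames, a pre-trained CNN classifies each frame as a sign, and confident
# predictions are accumulated as text for the counselor. The model file,
# label list, input size, and threshold below are illustrative assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

IMG_SIZE = 64                               # assumed CNN input resolution
model = load_model("sign_cnn.h5")           # hypothetical trained sign classifier
with open("labels.txt", encoding="utf-8") as f:
    labels = f.read().splitlines()          # one text label per CNN output class

cap = cv2.VideoCapture(0)                   # default webcam
transcript = []                             # recognized signs, stored as text

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize the frame to match the CNN's expected input.
    x = cv2.resize(frame, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    idx = int(np.argmax(probs))
    if probs[idx] > 0.8:                    # keep only confident predictions
        transcript.append(labels[idx])
    cv2.imshow("sign input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to end the session
        break

cap.release()
cv2.destroyAllWindows()
print(" ".join(transcript))                 # text handed to the counseling system
```

Since the paper describes recognizing sign motion rather than single poses, a model over sequences of frames would likely be needed in practice; the per-frame classifier above is only meant to show how OpenCV capture and CNN inference fit together.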
