Artificial intelligence wearable platform that supports the life cycle of the visually impaired

  • 박시웅 (Honam Research Center, Electronics and Telecommunications Research Institute) ;
  • 김정은 (Honam Research Center, Electronics and Telecommunications Research Institute) ;
  • 강현서 (Honam Research Center, Electronics and Telecommunications Research Institute) ;
  • 박형준 (Honam Research Center, Electronics and Telecommunications Research Institute)
  • Received : 2020.12.03
  • Accepted : 2020.12.29
  • Published : 2020.12.30

Abstract

In this paper, we propose a voice, object, and optical character recognition platform comprising a voice recognition-based smart wearable device, a smart device, and a web AI server, as an appropriate technology that learns the life cycle of the visually impaired in advance to help them live independently. The wearable device for the visually impaired was designed and manufactured with a reverse-neckband structure to improve wearing comfort and object recognition efficiency. A high-sensitivity compact microphone and speaker attached to the wearable device support a voice recognition interface implemented through the app of the linked smart device. The voice, object, and optical character recognition services were built on open-source software and Google APIs on the web AI server, and experiments confirmed that the recognition accuracy of the service platform averaged 90% or higher for voice, object, and optical character recognition.
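Although the paper does not publish server code, the division of work it describes (a smart-device app forwarding requests to a web AI server that runs three recognition services) could be dispatched roughly as follows. This is a minimal sketch: the mode names, function names, and placeholder results are illustrative assumptions, with the real services backed by open-source engines and Google APIs as stated in the abstract.

```python
# Sketch of the web AI server's mode-based recognition dispatch (hypothetical).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class RecognitionResult:
    mode: str          # "voice", "object", or "ocr"
    text: str          # transcript, object label, or extracted characters
    confidence: float  # recognizer-reported confidence

def recognize_voice(payload: bytes) -> RecognitionResult:
    # Placeholder for a speech-to-text call (e.g. a Google speech API).
    return RecognitionResult("voice", "<transcript>", 0.93)

def recognize_object(payload: bytes) -> RecognitionResult:
    # Placeholder for an open-source object-detection model.
    return RecognitionResult("object", "<object label>", 0.91)

def recognize_ocr(payload: bytes) -> RecognitionResult:
    # Placeholder for optical character recognition (e.g. a Google vision API).
    return RecognitionResult("ocr", "<characters>", 0.92)

# The server routes each request from the smart-device app by its mode.
DISPATCH: Dict[str, Callable[[bytes], RecognitionResult]] = {
    "voice": recognize_voice,
    "object": recognize_object,
    "ocr": recognize_ocr,
}

def handle_request(mode: str, payload: bytes) -> RecognitionResult:
    if mode not in DISPATCH:
        raise ValueError(f"unknown recognition mode: {mode}")
    return DISPATCH[mode](payload)
```

In this arrangement the wearable device only captures audio and images; all heavy recognition runs server-side, which matches the paper's split between the low-power neckband hardware and the web AI server.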

Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (Ministry of Science and ICT) in 2020 (No. 1711117076, Development of a wearable device based on a voice recognition interface for independent walking and independent living of the visually impaired).
