Performance Comparison of Machine Learning Models to Detect Screen Use and Devices

  • Hwang, Sangwon (Department of Computer Engineering, Graduate School, KOREATECH) ;
  • Kim, Dongwoo (Department of Computer Engineering, Graduate School, KOREATECH) ;
  • Lee, Juhwan (Department of Computer Engineering, Graduate School, KOREATECH) ;
  • Kang, Seungwoo (School of Computer Science and Engineering, KOREATECH)
  • Received : 2020.01.29
  • Accepted : 2020.03.05
  • Published : 2020.05.31

Abstract

Long-term use of digital screens in daily life can lead to computer vision syndrome, with symptoms such as eye strain, dry eyes, and headaches. To prevent computer vision syndrome, it is important to limit screen time and take frequent breaks. Recent smartphones offer a variety of applications that help users track their screen time. However, these apps are limited because users look at various screens, such as desktop, laptop, and tablet displays, in addition to the smartphone screen. In this paper, we propose and evaluate machine learning-based models that detect the screen device in use from color, IMU, and lidar sensor data. Our evaluation shows that neural network-based models achieve higher F1 scores than traditional machine learning models. Among the neural network-based models, the MLP- and CNN-based models score higher than the LSTM-based model; among the traditional machine learning models, the RF model performs best, followed by the SVM model.
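The feature pipeline and hyperparameters are not described on this page, so the following is only a minimal sketch of the kind of model comparison the abstract reports, assuming windowed color/IMU/lidar readings flattened into fixed-length feature vectors. The synthetic data, window size, feature count, and class labels below are placeholder assumptions, not the authors' setup.

```python
# Sketch: compare traditional ML models (RF, SVM) and an MLP on windowed
# sensor features using macro F1. The data here is synthetic; the actual
# study uses color, IMU, and lidar sensor readings (details not given here).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical setup: one flattened feature vector per sensor window.
N_WINDOWS, FEATURES_PER_WINDOW = 2000, 120
CLASSES = ["none", "smartphone", "laptop", "desktop", "tablet"]  # assumed labels

X = rng.normal(size=(N_WINDOWS, FEATURES_PER_WINDOW))   # placeholder features
y = rng.integers(len(CLASSES), size=N_WINDOWS)           # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: macro F1 = {f1_score(y_test, pred, average='macro'):.3f}")
```

With real sensor windows in place of the random arrays, this kind of loop yields the per-model F1 comparison summarized in the abstract; the CNN- and LSTM-based models would require a sequence-shaped input rather than flattened vectors.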
