• Title/Summary/Keyword: Gaze estimation

Evaluation of Gaze Depth Estimation using a Wearable Binocular Eye tracker and Machine Learning (착용형 양안 시선추적기와 기계학습을 이용한 시선 초점 거리 추정방법 평가)

  • Shin, Choonsung; Lee, Gun; Kim, Youngmin; Hong, Jisoo; Hong, Sung-Hee; Kang, Hoonjong; Lee, Youngho
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.19-26 / 2018
  • In this paper, we propose a gaze depth estimation method based on a binocular eye tracker for virtual reality and augmented reality applications. The proposed method collects a wide range of information about each eye from the eye tracker, such as the pupil center, gaze direction, and inter-pupil distance. It then builds gaze estimation models using a multilayer perceptron that infers gaze depth from the eye tracking information. Finally, we evaluated the method with 13 participants in two ways: performance based on their individual models and performance based on a generalized model. Through the evaluation, we found that the proposed method recognized gaze depth with 90.1% accuracy across the 13 individual models and with 89.7% accuracy for the generalized model including all participants.
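
As a rough illustration (not the authors' implementation), the sketch below trains a multilayer perceptron to map binocular eye-tracker features to discrete gaze depth levels; the feature layout, the five depth classes, and the data are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-sample features: left/right pupil centers (4 values),
# left/right gaze direction vectors (6 values), inter-pupil distance (1).
X = rng.normal(size=(2000, 11))
y = rng.integers(0, 5, size=2000)      # 5 assumed discrete depth levels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0)
mlp.fit(X_tr, y_tr)
print("gaze depth accuracy:", mlp.score(X_te, y_te))
```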

Non-intrusive Calibration for User Interaction based Gaze Estimation (사용자 상호작용 기반의 시선 검출을 위한 비강압식 캘리브레이션)

  • Lee, Tae-Gyun; Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.16 no.1 / pp.45-53 / 2020
  • In this paper, we describe a new method for acquiring calibration data through the user interactions that occur continuously during web browsing, and for performing calibration naturally while estimating the user's gaze. The proposed non-intrusive calibration is a tuning process that adapts a pre-trained gaze estimation model to a new user with the obtained data. To achieve this, a generalized CNN model for gaze estimation is trained first; non-intrusive calibration then adapts it quickly to new users through online learning. In experiments, the gaze estimation model was calibrated with combinations of various user interactions to compare performance, and improved accuracy was achieved compared to existing methods.
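
A minimal sketch of the adaptation idea, assuming a PyTorch model and treating interaction points (e.g., mouse clicks) as ground-truth gaze samples; the architecture and the names `GazeCNN` and `calibrate_step` are hypothetical, not the paper's.

```python
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    """Stand-in for the generalized pre-trained gaze model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(16 * 4 * 4, 2)    # (x, y) screen coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GazeCNN()                        # pretend this is pre-trained
for p in model.features.parameters():    # freeze the backbone,
    p.requires_grad = False              # tune only the head online

opt = torch.optim.SGD(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def calibrate_step(eye_image, click_xy):
    """One online-learning update from a single interaction event."""
    opt.zero_grad()
    loss = loss_fn(model(eye_image), click_xy)
    loss.backward()
    opt.step()
    return loss.item()

# Simulated interaction event: a 36x60 eye patch plus the clicked point.
print(calibrate_step(torch.randn(1, 1, 36, 60),
                     torch.tensor([[0.4, 0.7]])))
```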

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung; Ko, Min-Soo; Song, Hyok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.748-757 / 2021
  • In this paper, we propose a gaze estimation network in which eye landmark localization and gaze direction vector estimation are integrated into a single deep learning network. The proposed network uses the Stacked Hourglass Network as its backbone and consists of three main parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, the feature map extractor generates a feature map of the eye image for gaze direction estimation, and the gaze direction estimator combines the two outputs to estimate the final gaze direction vector. The network was trained on virtual synthetic eye images and landmark coordinates generated with the UnityEyes dataset, and the MPIIGaze dataset of real human eye images was used for performance evaluation. In the experiments, the network achieved a gaze estimation error of 3.9 and ran at 42 FPS (frames per second).
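
The sketch below illustrates one plausible way to fuse the landmark and feature-map branches into a gaze vector regressor; the layer sizes and the `GazeHead` module are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class GazeHead(nn.Module):
    """Fuse landmark coordinates with pooled image features -> gaze vector."""
    def __init__(self, n_landmarks=50, feat_dim=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(n_landmarks * 2 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 3))                      # 3D gaze direction

    def forward(self, landmarks_xy, feature_map):
        # landmarks_xy: (B, 50, 2), e.g. soft-argmax of hourglass heatmaps
        # feature_map:  (B, C, H, W) from the feature extractor branch
        feat = self.pool(feature_map).flatten(1)
        g = self.fc(torch.cat([landmarks_xy.flatten(1), feat], dim=1))
        return g / g.norm(dim=1, keepdim=True)      # unit direction vector

head = GazeHead()
gaze = head(torch.rand(2, 50, 2), torch.randn(2, 64, 16, 16))
print(gaze.shape)   # torch.Size([2, 3])
```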

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Displays are becoming larger and more varied in form, which makes previous gaze tracking methods difficult to apply; mounting the gaze tracking camera above the display removes the constraints imposed by display size and height. However, this setup cannot use the infrared corneal reflection information that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion, and a method for simply calculating the gaze position from the inner eye corner, the pupil center, and face pose information. In the proposed method, the camera switches between wide-angle and narrow-angle modes according to the person's position when capturing frames for gaze tracking. When a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after computing the face position; frames captured in narrow-angle mode contain the gaze direction information of a person at a long distance. The gaze direction calculation consists of a face pose estimation step and a gaze direction calculation step. Face pose is estimated by mapping the detected facial feature points to a 3D model. To calculate the gaze direction, we first detect an ellipse by splitting the iris edge information of the pupil, and if the pupil is occluded, we estimate its position with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose information, we calculate the gaze position on the display. In experiments across varying distances, we demonstrate that the proposed gaze tracking algorithm removes the constraints on display form and effectively calculates the gaze direction of a person at a long distance using a single camera.
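
A minimal OpenCV sketch of the occlusion-tolerant idea: fit an ellipse to the visible pupil boundary so that a partially occluded pupil still yields a center estimate. The threshold and the synthetic test image are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def pupil_center(eye_gray):
    """Return the (x, y) pupil center of a grayscale eye patch, or None."""
    # A dark-region threshold isolates the pupil/iris blob.
    _, mask = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    edge = max(contours, key=cv2.contourArea)
    if len(edge) < 5:                    # cv2.fitEllipse needs >= 5 points
        return None
    # The fit uses only the visible boundary points, so a pupil partially
    # hidden by the eyelid still yields a center estimate.
    (cx, cy), _, _ = cv2.fitEllipse(edge)
    return cx, cy

eye = np.full((60, 90), 200, np.uint8)       # bright synthetic eye patch
cv2.circle(eye, (45, 30), 12, 20, -1)        # dark pupil disk
eye[:24, :] = 200                            # eyelid occludes the top
print(pupil_center(eye))                     # roughly (45, 32)
```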

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering / v.8 no.5 / pp.575-580 / 2010
  • Eye gaze as a form of input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and mouse; however, with increasing accuracy and decreasing cost of eye gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms for fast, sub-pixel-precise pupil center detection and 2D eye gaze estimation by means of deformable template matching. First, we propose an algorithm based on a deformable angular integral search driven by minimum intensity values to localize the eyeball (iris outer boundary) in grayscale eye region images; it finds the pupil center, which is then used by our second algorithm for 2D eye gaze tracking. We detect the eye regions with Intel OpenCV AdaBoost Haar cascade classifiers and assign an approximate eyeball size based on the eye region size. Next, the pupil center is detected with the DAISMI (Deformable Angular Integral Search by Minimum Intensity) algorithm. The image is then binarized (black and white) using the percentage of black pixels over the eyeball circle area, for use in the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE is seeded with the initial pupil center coordinates, refines them, and estimates the final gaze directions and eyeball size. We have performed extensive experiments, achieved very encouraging results, and discuss the effectiveness of the proposed method through several experimental results.
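
A minimal sketch of the front end, assuming only what the abstract names (the OpenCV Haar cascade for eye detection) and substituting a crude darkest-point search for the pupil center where DAISMI would run; `face.jpg` is a hypothetical input file.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_centers(frame_gray):
    """Crude pupil-center estimates for each detected eye region."""
    centers = []
    for (x, y, w, h) in cascade.detectMultiScale(frame_gray, 1.1, 5):
        eye = cv2.GaussianBlur(frame_gray[y:y + h, x:x + w], (9, 9), 0)
        # The darkest point after smoothing approximates the pupil center
        # (a stand-in for the minimum-intensity search DAISMI performs).
        _, _, (px, py), _ = cv2.minMaxLoc(eye)
        centers.append((x + px, y + py))
    return centers

frame = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
if frame is not None:
    print(pupil_centers(frame))
```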

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong; Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection is locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is set above a monitor and the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. In experiments on a 19-inch monitor, the RMS error between the computed gaze positions and the real ones is about 2.01 inches.
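
A small worked example of the final geometric step, under the assumption of camera-centric coordinates with the monitor lying in the plane z = 0: compute the normal of the plane through three tracked 3D feature points and intersect that ray with the screen.

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3):
    """Intersect the face-plane normal ray with the monitor plane z = 0."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)       # normal of the face plane
    n = n / np.linalg.norm(n)
    origin = (p1 + p2 + p3) / 3.0        # cast the ray from the centroid
    if abs(n[2]) < 1e-9:
        return None                      # ray parallel to the screen
    t = -origin[2] / n[2]
    return origin + t * n                # the point where z == 0

# Face tilted toward the screen; coordinates in arbitrary length units.
print(gaze_point_on_monitor([0, 1, 50], [-3, -1, 51], [3, -1, 51]))
```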

Gaze Direction Estimation Method Using Support Vector Machines (SVMs) (Support Vector Machines을 이용한 시선 방향 추정방법)

  • Liu, Jing; Woo, Kyung-Haeng; Choi, Won-Ho
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.379-384 / 2009
  • A human gaze detection and tracking method is essential for human-machine interfaces (HMI) such as human-serving robots. This paper proposes a novel three-dimensional (3D) human gaze estimation method using face recognition, orientation estimation, and Support Vector Machines (SVMs). 2,400 images covering a pan range of −90° to 90° and a tilt range of −40° to 70° at 10° intervals were used. A stereo camera was used to obtain the global coordinate of the center point between the eyes, and Gabor filter banks of horizontal and vertical orientations with 4 scales were used to extract facial features. Experimental results show that the error rate of the proposed method is much lower than that of Liddell's method.
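
A minimal sketch of a comparable pipeline, assuming summary statistics over a Gabor bank with two orientations and four scales feeding an SVM; the pan bins and synthetic data are placeholders, not the paper's 2,400-image setup.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(face_gray):
    """Summary statistics of responses to a small Gabor filter bank."""
    feats = []
    for theta in (0, np.pi / 2):        # horizontal and vertical orientations
        for lam in (4, 8, 16, 32):      # 4 scales (wavelengths in pixels)
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5)
            resp = cv2.filter2D(face_gray.astype(np.float32), -1, kern)
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

# Synthetic stand-in data: 40 random "faces", 4 hypothetical pan bins.
rng = np.random.default_rng(0)
X = np.array([gabor_features(rng.integers(0, 255, (64, 64)))
              for _ in range(40)])
y = rng.integers(0, 4, 40)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```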

Human Activity Recognition using Model-based Gaze Direction Estimation (모델 기반의 시선 방향 추정을 이용한 사람 행동 인식)

  • Jung, Do-Joon; Yoon, Jeong-Oh
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.9-18 / 2011
  • In this paper, we propose a method that recognizes human activity using model-based gaze direction estimation in an indoor environment. The method consists of two steps. First, we detect the head region and estimate its gaze direction as prior information for human activity recognition. We use color and shape information to detect the head region, and a Bayesian network model representing the relationship between the head and the face to estimate the gaze direction. Second, we recognize the events and scenarios that describe the human activity. Events are recognized from changes in the human state, and scenarios are recognized with a rule-based method combining events with constraints. We define 4 types of scenarios related to gaze direction. Experimental results demonstrate the performance of both the gaze direction estimation and the human activity recognition.
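
A minimal sketch of the flavor of the second step: rule-based scenario recognition over a stream of state-change events with a time-window constraint. The event and scenario names are hypothetical, not the paper's four scenarios.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str       # e.g. "enter", "stop", "gaze_at_shelf"
    t: float        # timestamp in seconds

def recognize_scenario(events, max_gap=5.0):
    """Fire a scenario when its event pattern occurs within time gaps."""
    pattern = ("enter", "stop", "gaze_at_shelf")   # hypothetical rule
    matched, last_t = 0, None
    for e in events:
        in_window = last_t is None or e.t - last_t <= max_gap
        if e.name == pattern[matched] and in_window:
            matched, last_t = matched + 1, e.t
            if matched == len(pattern):
                return "browsing_shelf"            # hypothetical scenario
    return None

stream = [Event("enter", 0.0), Event("stop", 2.1),
          Event("gaze_at_shelf", 3.0)]
print(recognize_scenario(stream))                  # browsing_shelf
```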

Real Time Eye and Gaze Tracking (실시간 Eye와 Gaze 트래킹)

  • Min, Jin-Kyoung; Cho, Hyeon-Seob
    • Proceedings of the KAIS Fall Conference / 2004.11a / pp.234-239 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often assume a static head and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without per-user calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments involving gaze-contingent interactive graphic display.
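
A minimal sketch of the core mapping, using the fact that a GRNN is Nadaraya-Watson kernel regression; the 6-D pupil-parameter layout and the synthetic training pairs are assumptions for illustration.

```python
import numpy as np

class GRNN:
    """Generalized Regression Neural Network (kernel-weighted average)."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, Y):
        self.X, self.Y = np.asarray(X, float), np.asarray(Y, float)
        return self

    def predict(self, x):
        d2 = np.sum((self.X - np.asarray(x, float)) ** 2, axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))    # RBF kernel weights
        return w @ self.Y / w.sum()                # weighted mean output

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))    # pupil parameters (position, shape, ...)
Y = X[:, :2] @ np.array([[300, 0], [0, 200]]) + 400  # synthetic screen map
grnn = GRNN(sigma=0.8).fit(X, Y)
print(grnn.predict(X[0]), "vs true", Y[0])
```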

Real Time Gaze Discrimination for Human Computer Interaction (휴먼 컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Park, Ho-Sik; Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.125-132 / 2005
  • This paper describes a computer vision system based on active IR illumination for real-time gaze discrimination. Unlike most existing gaze discrimination techniques, which often assume a static head and require a cumbersome calibration process for each person, our system can perform robust and accurate gaze estimation without per-user calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in training. To further improve gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified; this leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments involving gaze-contingent interactive graphic display.
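
A minimal sketch of a two-stage reclassification pass in the spirit described above: samples predicted into a confusable group of classes are re-decided by a specialist classifier trained only on that group. The region labels and the confusable set are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))          # stand-in gaze features
y = rng.integers(0, 8, 600)            # 8 hypothetical screen regions
confusable = {2, 3}                    # regions that tend to be mixed up

stage1 = SVC().fit(X, y)
mask = np.isin(y, list(confusable))
stage2 = SVC().fit(X[mask], y[mask])   # specialist on the hard classes

def classify(x):
    x = np.asarray(x).reshape(1, -1)
    pred = stage1.predict(x)[0]
    # Re-examine only the classes the first stage tends to confuse.
    return stage2.predict(x)[0] if pred in confusable else pred

print(classify(X[0]))
```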