• Title/Summary/Keyword: Optical Pose Tracking


A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues in vision-based computer animation, and this paper addresses both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates the 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered despite lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points are tracked with optical flow and retargeted to the 3D face model. At the same time, an RBF (Radial Basis Function) is used to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis directly tracks the variations of the major feature points and indirectly estimates the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
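
The RBF-driven local deformation described in this abstract can be illustrated with a minimal sketch (not the authors' implementation): displacements measured at a few tracked feature points are propagated to neighbouring mesh vertices with Gaussian RBF weights. The function name, the kernel width sigma, and the toy data are all assumptions.

```python
import numpy as np

def rbf_deform(vertices, feature_pts, feature_disp, sigma=0.05):
    """Propagate feature-point displacements to nearby mesh vertices
    with Gaussian radial basis function weights (illustrative sketch)."""
    # Solve for RBF coefficients so the deformation interpolates the
    # measured displacements exactly at the feature points.
    d = np.linalg.norm(feature_pts[:, None, :] - feature_pts[None, :, :], axis=-1)
    Phi = np.exp(-(d / sigma) ** 2)                 # (k, k) kernel matrix
    coeffs = np.linalg.solve(Phi, feature_disp)     # (k, 3) coefficients

    # Evaluate the interpolant at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None, :] - feature_pts[None, :, :], axis=-1)
    weights = np.exp(-(dv / sigma) ** 2)            # (n, k) vertex weights
    return vertices + weights @ coeffs              # deformed vertex positions

# Toy usage: three tracked feature points pull a small patch of vertices.
verts = np.random.rand(100, 3)
feats = verts[:3]
disp = np.array([[0.01, 0.0, 0.0], [0.0, 0.02, 0.0], [0.0, 0.0, 0.01]])
deformed = rbf_deform(verts, feats, disp)
```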

Design and Implementation of Real-Time Helmet Pose Tracking System (실시간 헬멧자세 추적시스템의 설계 및 구현)

  • Hwang, Sang-Hyun; Chung, Chul-Ju; Kim, Dong-Sung
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.44 no.2, pp.123-130, 2016
  • This paper describes the design and implementation of an HTS (Helmet Tracking System) that provides a coincident LOS (Line of Sight) between the aircraft and the HMD (Helmet Mounted Display), which presents flight and mission information on the pilot's helmet. The functionality and performance of the HMD system depend on the performance of the helmet tracking system. The HTS design targets real-time performance and reliability, achieved by predicting non-periodic latency, together with high tracking accuracy. To demonstrate the feasibility of the proposed approach, a robust hybrid scheme that fuses optical and inertial tracking is evaluated on an implemented test-bed. Experimental results show real-time, reliable tracking despite external error sources.
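
The abstract does not detail the hybrid optical/inertial scheme; as a rough, hedged illustration of how such a fusion is often arranged, the sketch below blends high-rate gyro integration with low-rate, drift-free optical orientation fixes using a complementary filter. The class name, the update rates, and the gain alpha are assumptions, not values from the paper.

```python
import numpy as np

class ComplementaryFusion:
    """Blend high-rate gyro integration with low-rate optical pose fixes.
    Illustrative sketch only; angles are roll/pitch/yaw in radians."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha            # trust placed in the integrated gyro estimate
        self.angles = np.zeros(3)     # current roll, pitch, yaw estimate

    def predict(self, gyro_rates, dt):
        # High-rate step: integrate angular rates (drifts over time).
        self.angles += gyro_rates * dt
        return self.angles

    def correct(self, optical_angles):
        # Low-rate step: pull the estimate toward the drift-free optical fix.
        self.angles = self.alpha * self.angles + (1.0 - self.alpha) * optical_angles
        return self.angles

# Usage: e.g. 1 kHz gyro updates with optical fixes arriving at 60 Hz.
fusion = ComplementaryFusion(alpha=0.98)
est = fusion.predict(np.array([0.0, 0.01, 0.0]), dt=0.001)
est = fusion.correct(np.array([0.0, 0.012, 0.0]))
```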

An Image-based Augmented Reality System for Multiple Users using Multiple Markers (다수 마커를 활용한 영상 기반 다중 사용자 증강현실 시스템)

  • Moon, Ji won; Park, Dong woo; Jung, Hyun suk; Kim, Young hun; Hwang, Sung Soo
    • Journal of Korea Multimedia Society, v.21 no.10, pp.1162-1170, 2018
  • This paper presents an augmented reality system for multiple users. The proposed system performs image-based pose estimation for each user, and each user's pose is shared with the other users via a network server. For camera-based pose estimation, multiple markers are installed in a pre-determined space and the marker with the best appearance is selected. The marker is detected via corner point detection, and for robust pose estimation the marker's corner points are tracked with an optical flow algorithm. Experimental results show that the proposed system successfully provides an augmented reality application to multiple users even when the users move rapidly and some of the markers are occluded by users.
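
As a hedged illustration of the marker-corner-to-pose step mentioned above, the snippet below recovers a marker pose from four tracked corner points with OpenCV's solvePnP. The marker size, the camera intrinsics, and the function name are placeholders, not the paper's values.

```python
import cv2
import numpy as np

# Known marker geometry: a square marker of side 0.10 m, corners in the marker frame.
MARKER_SIZE = 0.10
object_pts = np.array([[0.0, 0.0, 0.0],
                       [MARKER_SIZE, 0.0, 0.0],
                       [MARKER_SIZE, MARKER_SIZE, 0.0],
                       [0.0, MARKER_SIZE, 0.0]], dtype=np.float32)

# Placeholder intrinsics; in practice these come from camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

def pose_from_corners(corner_pts_2d):
    """corner_pts_2d: (4, 2) image coordinates of the marker corners, in the
    same order as object_pts (e.g. tracked by optical flow).
    Returns the marker rotation matrix and translation vector, or None."""
    ok, rvec, tvec = cv2.solvePnP(object_pts,
                                  corner_pts_2d.astype(np.float32),
                                  K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the marker in the camera frame
    return R, tvec
```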

Kalman Filter Based Pose Data Fusion with Optical Tracking System and Inertial Navigation System Networks for Image Guided Surgery (영상유도수술을 위한 광학추적 센서 및 관성항법 센서 네트웍의 칼만필터 기반 자세정보 융합)

  • Oh, Hyun Min; Kim, Min Young
    • The Transactions of The Korean Institute of Electrical Engineers, v.66 no.1, pp.121-126, 2017
  • A tracking system is essential for Image Guided Surgery (IGS). An Optical Tracking System (OTS) is widely used in IGS for its high accuracy and ease of use; however, an OTS fails when the marker is occluded. In this paper, sensor data fusion between an OTS and an Inertial Navigation System (INS) is proposed to solve this problem. The proposed system improves tracking accuracy by suppressing Gaussian sensor error and compensates for the respective weaknesses of the OTS and the IMU through Kalman filter-based sensor fusion. A sensor calibration method that further improves accuracy is also introduced. The experiments verify the effectiveness of the proposed algorithm.
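
A minimal sketch along the lines described in this abstract, assuming a constant-velocity model in which IMU acceleration drives the Kalman prediction and the OTS position is used as the measurement whenever the marker is visible. The noise values and class name are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

class OtsInsKalman:
    """Per-axis position/velocity Kalman filter: IMU acceleration drives the
    prediction, and the OTS position corrects it while the marker is visible.
    Illustrative sketch; noise values are assumptions, not from the paper."""

    def __init__(self, dt, q=1e-3, r=1e-4):
        self.x = np.zeros(2)                          # state: [position, velocity]
        self.P = np.eye(2)                            # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
        self.B = np.array([0.5 * dt * dt, dt])        # control input (acceleration)
        self.H = np.array([[1.0, 0.0]])               # OTS measures position only
        self.Q = q * np.eye(2)                        # process noise
        self.R = np.array([[r]])                      # OTS measurement noise

    def predict(self, imu_accel):
        self.x = self.F @ self.x + self.B * imu_accel
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, ots_position):
        # Skip this step when the optical marker is occluded.
        y = ots_position - self.H @ self.x            # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

kf = OtsInsKalman(dt=0.01)
kf.predict(imu_accel=0.2)                      # 100 Hz IMU-driven prediction
kf.update(ots_position=np.array([0.001]))      # OTS fix when the marker is visible
```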

Design and Evaluation of Intelligent Helmet Display System (지능형 헬멧시현시스템 설계 및 시험평가)

  • Hwang, Sang-Hyun
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.45 no.5, pp.417-428, 2017
  • In this paper, we describe the architectural design, unit component hardware design, and core software design (Helmet Pose Tracking Software and Terrain Elevation Data Correction Software) of the IHDS (Intelligent Helmet Display System), and report the results of unit and integration tests. Following the trend of the latest helmet display systems, the specification, which includes 3D map display, FLIR (Forward Looking Infra-Red) display, hybrid helmet pose tracking, a visor-reflection type binocular optical system, NVC (Night Vision Camera) display, and a lightweight composite helmet shell, was applied to the design. In particular, we propose unique design concepts such as automatic correction of the altitude error of 3D map data, high-precision image registration, a multi-color lighting optical system, a transmissive image-emitting surface using a diffraction optical element, a tracking camera that minimizes the latency of helmet pose estimation, and air pockets for fixing the helmet on the head. After completing prototypes of all system components, unit tests and system integration tests were performed to verify the functions and performance.

Multi-Object Tracking Based on Keypoints Using Homography in Mobile Environments (모바일 환경 Homography를 이용한 특징점 기반 다중 객체 추적)

  • Han, Woo ri; Kim, Young-Seop; Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology, v.14 no.3, pp.67-72, 2015
  • This paper proposes an object tracking system based on keypoints using homography in mobile environments. The proposed system is based on markerless tracking and consists of four modules: recognition, tracking, detection, and learning. The recognition module detects and identifies an object in the current frame by matching SURF keypoints against the database using LSH, and then generates reference object information. The tracking module tracks an object using the homography estimated from correspondences between the learned object keypoints and the current keypoints, and then updates the window enclosing the object to define its pose. When tracking fails, the detection module re-locates the object using the best available knowledge among the learned object information. Experimental results show that the proposed system can recognize and track objects while updating the object pose on a mobile platform.
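
One way to realize the homography-based window update described in the tracking module (a hedged sketch, not the paper's implementation): matched keypoints between the learned view and the current frame yield a RANSAC homography, which then warps the object window. The inlier thresholds and function names are assumptions.

```python
import cv2
import numpy as np

def update_object_window(learned_pts, current_pts, window_corners):
    """learned_pts, current_pts: (N, 2) matched keypoint coordinates.
    window_corners: (4, 2) corners of the object window in the learned view.
    Returns the window corners warped into the current frame, or None on failure."""
    if len(learned_pts) < 10:
        return None
    H, inliers = cv2.findHomography(learned_pts.astype(np.float32),
                                    current_pts.astype(np.float32),
                                    cv2.RANSAC, ransacReprojThreshold=3.0)
    if H is None or inliers is None or int(inliers.sum()) < 8:
        return None                                   # too few inliers: tracking lost
    corners = window_corners.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(corners, H).reshape(-1, 2)
```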

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun; Jung, Il-Kyun; Park, Chang-Woo; Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems, v.17 no.3, pp.229-235, 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from the two images of a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (Infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, a skin-color model, and a Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm together with AdaBoost. The results from each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, a few experiments were conducted in various environments. The experimental results indicate good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also extends to surveillance and military systems.
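
The abstract does not spell out how the two ROIs are combined; as a simple illustration, assuming the CCD and IR images are already registered into a common coordinate frame, the sketch below keeps a visible-light detection only when it overlaps an infrared detection sufficiently. The IoU threshold and function names are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def fuse_rois(ccd_boxes, ir_boxes, min_iou=0.3):
    """Keep CCD detections that are confirmed by an overlapping IR detection
    (assumes both images are registered into a common coordinate frame)."""
    fused = []
    for cb in ccd_boxes:
        if any(iou(cb, ib) >= min_iou for ib in ir_boxes):
            fused.append(cb)
    return fused
```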

Calibration for a Planar Cable-Driven Parallel Robot (평면형 병렬 케이블 구동 로봇에 대한 형상보정)

  • Jin, Xuejun; Jung, Jinwoo; Jun, Jong Pyo; Park, Sukho; Park, Jong-Oh; Ko, Seong Young
    • Journal of Institute of Control, Robotics and Systems, v.21 no.11, pp.1070-1075, 2015
  • This paper proposes a calibration algorithm for a three-degree-of-freedom (DOF) planar cable-driven parallel robot (CDPR). To evaluate the proposed algorithm, we calibrated the winches and an optical tracking sensor, measured the end-effector pose with the optical tracking sensor, and calculated the accurate robot configuration from the measurements. The accuracy test of the end-effector pose followed the guidelines of "Manipulating industrial robots - Performance criteria and related test methods." The test verified that the position accuracy of a 2 m × 2 m planar cable robot can be improved by up to 20% using the proposed calibration algorithm.
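
The cited guideline is ISO 9283 ("Manipulating industrial robots - Performance criteria and related test methods"), which defines positioning accuracy as the distance between the commanded position and the barycentre of the attained positions. A small sketch of that computation, with placeholder names, is given below; it is an illustration of the metric, not the paper's evaluation code.

```python
import numpy as np

def position_accuracy(commanded, attained):
    """ISO 9283-style positioning accuracy: distance between the commanded
    position and the barycentre of the attained positions.
    commanded: (3,) target position; attained: (n, 3) measured positions
    (e.g. end-effector positions reported by the optical tracking sensor)."""
    barycentre = np.mean(np.asarray(attained), axis=0)
    return np.linalg.norm(barycentre - np.asarray(commanded))

def position_repeatability(attained):
    """ISO 9283-style repeatability: mean distance to the barycentre plus
    three times the sample standard deviation of those distances."""
    attained = np.asarray(attained)
    barycentre = attained.mean(axis=0)
    d = np.linalg.norm(attained - barycentre, axis=1)
    return d.mean() + 3.0 * d.std(ddof=1)
```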

Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu; You, Su-Jeong; Cho, Hye-Kyung
    • The Journal of Korea Robotics Society, v.10 no.3, pp.171-177, 2015
  • A telerobot offers more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, one of the key non-verbal signals in interaction, can also be inferred from his/her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed and the Harris corner detector is combined with Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin-color-based face detection algorithm is proposed to achieve robust performance under varying head directions while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments on the BU standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
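
A minimal sketch of the Harris-corner plus Lucas-Kanade tracking step mentioned above, written with OpenCV; the parameter values, ROI handling, and function name are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def track_head_features(prev_gray, curr_gray, face_roi):
    """Detect Harris-style corners inside the face ROI of the previous frame
    and track them into the current frame with pyramidal Lucas-Kanade flow.
    face_roi: (x, y, w, h). Returns matched (prev_pts, curr_pts) arrays."""
    x, y, w, h = face_roi
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255

    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                       qualityLevel=0.01, minDistance=5,
                                       mask=mask, useHarrisDetector=True, k=0.04)
    if prev_pts is None:
        return None, None

    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)

    ok = status.ravel() == 1                       # keep successfully tracked points
    return prev_pts[ok].reshape(-1, 2), curr_pts[ok].reshape(-1, 2)
```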

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi; Chun, Junchul
    • Journal of Internet Computing and Services, v.14 no.6, pp.117-124, 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features, namely both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to illumination changes, within the detected facial area. The feature points are then tracked in every frame using optical flow, and the face direction is determined from the tracked feature points. Further, to avoid falsely accepting feature positions when feature coordinates are lost during optical flow tracking, the method validates the locations of the facial features in real time using template matching of the detected features. Depending on the correlation score of this template matching, the face direction estimation process either re-detects the facial features or continues tracking them while determining the face direction. Template matching initially stores the locations of the four facial features (the left eye, the right eye, the tip of the nose, and the mouth) in the feature detection phase, and re-evaluates this information, by detecting the facial features again from the input image, whenever the similarity measure between the stored information and the features traced by optical flow crosses a certain threshold. The proposed approach automatically alternates between the feature detection phase and the feature tracking phase, enabling stable face pose estimation in real time. The experiments show that the proposed method estimates face direction efficiently.
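
The detection/tracking switch described above can be sketched roughly as follows (not the authors' code): feature templates stored at detection time are re-checked against the tracked locations with normalized cross-correlation, and a low score triggers re-detection. The threshold, patch size, and the helper names in the commented loop are hypothetical.

```python
import cv2
import numpy as np

CORR_THRESHOLD = 0.6   # assumed value; below this, fall back to re-detection

def validate_tracked_features(frame_gray, tracked_pts, templates, half=12):
    """Check each tracked feature location against the template stored at
    detection time using normalized cross-correlation. Returns True only if
    every feature still resembles its stored template."""
    for (x, y), tmpl in zip(tracked_pts, templates):
        x, y = int(round(x)), int(round(y))
        patch = frame_gray[max(0, y - half):y + half, max(0, x - half):x + half]
        if patch.shape[0] < tmpl.shape[0] or patch.shape[1] < tmpl.shape[1]:
            return False                                   # feature left the frame
        score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score < CORR_THRESHOLD:
            return False
    return True

# Inside the main loop (sketch; the helpers below are hypothetical):
#   pts = track_with_optical_flow(prev_gray, curr_gray, pts)
#   if pts is None or not validate_tracked_features(curr_gray, pts, templates):
#       pts, templates = detect_facial_features(curr_gray)   # re-detection phase
#   yaw, pitch = face_direction_from_features(pts)           # direction estimate
```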