Title/Summary/Keyword: pose tracking

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.8, pp.3011-3024, 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensors. The orientation estimated from an Inertial Measurement Unit (IMU) is used, together with the intrinsic parameters of the camera, to calculate the essential matrix. Using epipolar geometry, outliers among the feature-point matches in the image sequence are eliminated; the IMU thus helps to remove erroneous point matches at an early stage in images of dynamic scenes. After the outliers are removed, the remaining feature-point correspondences are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested against existing methods, and the experimental results demonstrate its effectiveness.
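
The IMU-aided outlier rejection described above can be sketched roughly as follows, assuming an OpenCV-style pinhole model; the names (`R_imu`, `t`, `K`) and the residual threshold are illustrative, not the paper's.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_inliers(pts1, pts2, R_imu, t, K, thresh=1e-3):
    """Keep matches consistent with the essential matrix E = [t]x R.

    pts1, pts2 : (N, 2) pixel coordinates of matched features
    R_imu      : (3, 3) relative rotation taken from the IMU
    t          : (3,) assumed translation direction (up to scale)
    K          : (3, 3) camera intrinsic matrix
    """
    E = skew(t / np.linalg.norm(t)) @ R_imu
    # Normalize pixel coordinates with the intrinsics: x = K^-1 [u v 1]^T
    K_inv = np.linalg.inv(K)
    ones = np.ones((len(pts1), 1))
    x1 = (K_inv @ np.hstack([pts1, ones]).T).T
    x2 = (K_inv @ np.hstack([pts2, ones]).T).T
    # Epipolar constraint: x2^T E x1 should be near zero for true matches
    residual = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
    return residual < thresh
```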

Design of Face Recognition and Tracking System by Using RBFNNs Pattern Classifier with Object Tracking Algorithm (RBFNNs 패턴분류기와 객체 추적 알고리즘을 이용한 얼굴인식 및 추적 시스템 설계)

  • Oh, Seung-Hun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers, v.64 no.5, pp.766-778, 2015
  • In this paper, we design a hybrid recognition and tracking system realized with the aid of a polynomial-based RBFNN pattern classifier and a particle filter. The RBFNN classifier is built by learning training data comprising images of diverse poses, and its parameters are optimized by Particle Swarm Optimization (PSO). The test data are face images obtained under real conditions, in which the face region is detected by the AdaBoost algorithm. To improve recognition performance for a detected image, pose estimation is carried out as a preprocessing step before recognition. PCA is used for pose estimation: the detected image is assigned to one of the built poses by considering the feature difference between the previously built pose images and the newly detected image. Recognition is then performed by the polynomial-based RBFNN pattern classifier, and if the detected face matches the tracking target, the target is traced by the particle filter in real time. When particle-filter tracking fails, the AdaBoost algorithm detects the facial area again, and the pose estimation and recognition procedures are repeated as above. Finally, experimental results are compared and analyzed using the Honda/UCSD benchmark database.
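
For illustration, a forward pass of a plain Gaussian RBF network classifier is sketched below; the paper uses a polynomial-based RBFNN with PSO-optimized parameters, so treat the Gaussian kernel and all names here as stand-ins.

```python
import numpy as np

def rbfnn_predict(x, centers, widths, weights):
    """Forward pass of a simple Gaussian RBF network classifier.

    x       : (d,) input feature vector (e.g., a reduced face image)
    centers : (m, d) RBF centers
    widths  : (m,) RBF widths; in the paper such parameters are
              optimized by PSO, which is omitted here
    weights : (m, c) linear output weights for c classes
    """
    # Hidden layer: Gaussian activation of the distance to each center
    d2 = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * widths ** 2))
    # Output layer: linear combination; argmax gives the predicted class
    scores = h @ weights
    return int(np.argmax(scores))
```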

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea, v.19 no.1, pp.77-90, 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to accommodate the user's head movements in such an interface. We propose an efficient camera calibration method to track the 3D position and orientation of the user's head accurately, and we evaluate its performance. The experimental error analysis shows that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method commonly used for camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and virtual reality technology.
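
The conventional direct linear transformation (DLT) that serves as the baseline in this study is a standard textbook procedure; a minimal sketch follows (the authors' improved calibration method is not reproduced here).

```python
import numpy as np

def dlt_projection_matrix(X, u):
    """Estimate the 3x4 camera projection matrix P with the DLT.

    X : (n, 3) known 3D calibration points, n >= 6
    u : (n, 2) corresponding image points in pixels

    Each correspondence contributes two rows of the homogeneous
    system A p = 0; p is the right singular vector of A with the
    smallest singular value.
    """
    rows = []
    for (x, y, z), (px, py) in zip(X, u):
        Xh = [x, y, z, 1.0]
        rows.append([*Xh, 0, 0, 0, 0, *(-px * c for c in Xh)])
        rows.append([0, 0, 0, 0, *Xh, *(-py * c for c in Xh)])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # defined up to scale
```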

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper deals with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation robustly estimates the 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and by updating the template dynamically the head pose can be recovered regardless of illumination variations and self-occlusion. In the facial expression synthesis phase, the movements of the major facial feature points are tracked using optical flow and retargeted to the 3D face model, while an RBF (Radial Basis Function) deforms the local area of the face model around each major feature point. Consequently, facial expression synthesis directly tracks the variations of the major feature points and indirectly estimates the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
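
The feature-tracking step above can be illustrated with pyramidal Lucas-Kanade optical flow as implemented in OpenCV; the window size and pyramid depth below are arbitrary choices, and the RBF-based retargeting to the 3D model is only noted in a comment.

```python
import cv2
import numpy as np

def track_feature_points(prev_gray, cur_gray, prev_pts):
    """Track facial feature points between frames with pyramidal LK flow.

    prev_pts : (N, 1, 2) float32 array of feature locations in prev_gray
    Returns the tracked points and a boolean mask of points that were
    found again in the current frame.
    """
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return cur_pts, ok

# The per-frame displacements cur_pts - prev_pts would then be
# retargeted to the 3D face model, with RBF interpolation deforming
# the regions around each major feature point (omitted here).
```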

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society, v.21 no.5, pp.41-48, 2021
  • This paper introduces a mobile augmented visualization technology that overlays a three-dimensional virtual human body on a mannequin model using two pose (position and rotation) tracking sensors. Conventional camera-based tracking for augmented visualization fails to calculate the camera pose when the camera shakes or moves quickly, because it relies on the camera image; using a pose tracking sensor overcomes this disadvantage. Moreover, even if the mannequin is moved or rotated, augmented visualization remains possible using the data from the pose tracking sensor attached to the mannequin, and above all there is no computational load for camera tracking.
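
A minimal sketch of the transform composition implied above: with both the mannequin and the mobile device carrying a pose tracking sensor, the overlay pose follows from two rigid transforms. All pose values below are made-up examples.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 rigid transform from rotation R (3x3) and position t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# World-space poses reported by the two tracking sensors (hypothetical values).
T_world_mannequin = pose_to_matrix(np.eye(3), np.array([1.0, 0.0, 0.5]))
T_world_camera = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))

# The virtual human body is anchored in mannequin coordinates, so its
# pose in the (mobile) camera's view is camera^-1 * mannequin. Because
# both poses come from the sensors, no image-based tracking is needed,
# and moving either the mannequin or the camera keeps the overlay aligned.
T_camera_mannequin = np.linalg.inv(T_world_camera) @ T_world_mannequin
```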

SIFT-Like Pose Tracking with LIDAR using Zero Odometry (이동정보를 배제한 위치추정 알고리즘)

  • Kim, Jee-Soo;Kwak, Nojun
    • Journal of Institute of Control, Robotics and Systems, v.22 no.11, pp.883-887, 2016
  • Navigating an unknown environment is a challenging task for a robot, especially when many obstacles exist and the odometry lacks reliability. Pose tracking allows the robot to determine its location relative to its previous location. ICP (iterative closest point) is a powerful method for matching two point clouds and determining the transformation between them. However, when odometry is unavailable and the robot has moved far from its previous location, ICP fails to calculate the exact displacement. In this paper, we suggest a method that can match two point clouds taken a long distance apart. Without using any odometry information, it exploits only corner-point features that carry information about the surroundings. The algorithm is fast enough to run in real time.
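
The baseline ICP the paper improves on can be sketched as a nearest-neighbor loop with a Kabsch (SVD) alignment step; this is a generic implementation, not the authors' corner-feature method.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Point-to-point ICP aligning src (N, d) to dst (M, d), d = 2 or 3.

    Converges only when the initial displacement is small, which is
    exactly the limitation the paper addresses with odometry-free
    corner features.
    """
    src = src.copy()
    R_total = np.eye(src.shape[1])
    t_total = np.zeros(src.shape[1])
    tree = cKDTree(dst)
    for _ in range(iters):
        # 1. Match each source point to its nearest destination point.
        _, idx = tree.query(src)
        matched = dst[idx]
        # 2. Solve for the best rigid transform with the Kabsch method.
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the update and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```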

Upper Body Tracking Using Hierarchical Sample Propagation Method and Pose Recognition (계층적 샘플 생성 방법을 이용한 상체 추적과 포즈 인식)

  • Cho, Sang-Hyun;Kang, Hang-Bong
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.5, pp.63-71, 2008
  • In this paper, we propose a color-based hierarchically propagated particle filter that extends the color-based particle filter to articulated upper-body tracking. Since color features are robust to partial occlusion and rotation, the color-based particle filter is widely used for object tracking. In articulated body tracking, however, the traditional particle filter is not desirable because the dimension of the state vector is usually high, and thus many samples are required for robust tracking. To overcome this problem, we track each body part hierarchically, conditioning each part on the previously tracked one. This hierarchical tracking method reduces the number of samples needed for robust tracking in cluttered environments. For human pose recognition, we classify the pose into eight categories with a Support Vector Machine (SVM) according to the angle between the upper arm and forearm. Experimental results show that the proposed method is more efficient than the traditional particle filter.
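
The angle feature underlying the eight-category SVM can be computed from tracked joint positions as follows; the joint names and the scikit-learn usage in the comment are assumptions, as the paper does not publish code.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle in degrees between the upper arm and forearm from 2D joints."""
    u = np.asarray(shoulder, float) - np.asarray(elbow, float)
    f = np.asarray(wrist, float) - np.asarray(elbow, float)
    cos = u @ f / (np.linalg.norm(u) * np.linalg.norm(f))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Angles from both arms (plus any other joint features) would be fed
# to an SVM trained on the eight pose categories, e.g. with scikit-learn:
#   from sklearn.svm import SVC
#   clf = SVC(kernel='rbf').fit(train_angles, train_labels)
```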

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information, v.21 no.10, pp.11-19, 2016
  • Displays have become large and varied in form, so previous gaze tracking methods do not apply to them; mounting the gaze-tracking camera above the display can solve the problems caused by the size or height of the display. With this setup, however, the corneal reflection of infrared illumination exploited by previous methods is unavailable. In this paper, we propose a pupil detection method that is robust to occlusion of the eyes, and a method for simply calculating the gaze position using the inner eye corner, the center of the pupil, and face pose information. In the proposed method, the camera switches between wide-angle and narrow-angle modes according to the position of the person: if a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after calculating the face position. The frame captured in narrow-angle mode contains the gaze direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze calculation step. The face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted to the edge information of the iris; if the pupil is occluded, its position is estimated with a deformable template. Then, using the center of the pupil, the inner eye corner, and the face pose information, the gaze position on the display is calculated. In the experiments, the proposed gaze tracking algorithm overcomes the constraints imposed by the form of the display and effectively calculates the gaze direction of a person at a long distance using a single camera, as demonstrated at various distances.
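
The ellipse-fitting step for pupil localization can be sketched with OpenCV as follows; the threshold value is a guess, and the deformable-template fallback for occluded pupils is omitted.

```python
import cv2
import numpy as np

def pupil_center(eye_gray):
    """Locate the pupil by fitting an ellipse to iris edge points.

    A rough sketch: threshold the dark pupil region, take the largest
    contour, and fit an ellipse. The paper additionally falls back to
    a deformable template when the pupil is partially occluded.
    """
    _, mask = cv2.threshold(eye_gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:           # fitEllipse needs at least 5 points
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(largest)
    return np.array([cx, cy])
```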

Vision-Based Trajectory Tracking Control System for a Quadrotor-Type UAV in Indoor Environment (실내 환경에서의 쿼드로터형 무인 비행체를 위한 비전 기반의 궤적 추종 제어 시스템)

  • Shi, Hyoseok;Park, Hyun;Kim, Heon-Hui;Park, Kwang-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.39C no.1, pp.47-59, 2014
  • This paper deals with a vision-based trajectory tracking control system for a quadrotor-type UAV intended for entertainment in indoor environments. In contrast to outdoor flights, which emphasize autonomy to complete missions such as aerial photography and reconnaissance, indoor flights for entertainment demand trajectory-following and hovering skills with precise and stable performance. This paper proposes a trajectory tracking control system consisting of a motion generation module, a pose estimation module, and a trajectory tracking module. The motion generation module generates a sequence of motions specified as 3D locations at each sampling time. The pose estimation module estimates the 3D position and orientation of the quadrotor by recognizing a circular ring pattern installed on the vehicle. The trajectory tracking module controls the 3D position of the quadrotor in real time using the information from the other two modules. The proposed system is tested through several experiments covering one-point, multi-point, and trajectory tracking control.
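
The paper does not state the control law of its trajectory tracking module; a per-axis PID controller is a common choice for this kind of position control and is sketched below with made-up gains.

```python
class PIDAxis:
    """PID controller for one position axis of a quadrotor (a sketch
    under assumed gains, not the authors' controller)."""

    def __init__(self, kp=1.2, ki=0.02, kd=0.6):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        """Return a control effort driving measured toward target."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis: at each sampling time the motion generation
# module supplies the target 3D point and the pose estimation module
# (ring-pattern recognition) supplies the measured position.
controllers = [PIDAxis() for _ in range(3)]
```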

3D Map Generation System for Indoor Autonomous Navigation (실내 자율 주행을 위한 3D Map 생성 시스템)

  • Moon, SungTae;Han, Sang-Hyuck;Eom, Wesub;Kim, Youn-Kyu
    • Aerospace Engineering and Technology, v.11 no.2, pp.140-148, 2012
  • Autonomous navigation requires a map, pose tracking, and shortest-path finding. Because there is no GPS signal in indoor environments, the current position must be recognized within a 3D map, for example by image processing. In this paper, we explain how to create a 3D map using a depth camera such as the Kinect, and how to track the pose within that 3D map using 2D images taken from a camera. A mechanism for avoiding obstacles is also discussed.
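
The depth-camera back-projection at the heart of such 3D map creation can be sketched as follows, assuming calibrated pinhole intrinsics; actual Kinect intrinsics are device-specific and come from calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to a 3D point cloud.

    fx, fy, cx, cy are the depth camera intrinsics. Each pixel (u, v)
    with depth z maps to ((u - cx) z / fx, (v - cy) z / fy, z) in the
    camera frame; accumulating such clouds over tracked poses builds
    the 3D map.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]     # drop pixels with no depth reading
```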