• Title/Summary/Keyword: pose and motion estimation


2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang; Hua, Guoguang; Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2042-2059, 2019
  • Digital shadow puppetry has traditionally relied on expensive motion capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose data with a simple camera in real scenarios and uses it to drive a virtual Chinese shadow play in a 2.5D scene. We propose a dedicated method for extracting human pose data to drive the virtual shadow play, which we call 2.5D human pose estimation. First, we obtain initial data with a 3D human pose estimation method. In the subsequent transformation, we treat depth as an implicit feature and map the body joints to a constrained range; we call the resulting data 2.5D pose data. However, the 2.5D pose data cannot control the shadow puppet directly, owing to differences in motion pattern and body structure between real poses and shadow puppets. To this end, the 2.5D pose data are transformed in an implicit pose mapping space based on a self-network, producing the final 2.5D pose expression data for animating the shadow puppets. Experimental results demonstrate the effectiveness of the new method.
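The abstract describes mapping 3D joints into a "2.5D" representation by treating depth as an implicit feature constrained to a fixed range. The paper's exact mapping is not given; a minimal sketch of one plausible reading, where metric depth is normalized per frame into a fixed constraint interval, could look like:

```python
def to_2p5d(joints_3d, depth_min=-1.0, depth_max=1.0):
    """Map 3D joints (x, y, z) to a 2.5D pose: x, y are kept, and metric
    depth z is replaced by an implicit scalar normalized into a fixed
    constraint range (an illustrative assumption, not the paper's formula)."""
    zs = [z for _, _, z in joints_3d]
    lo, hi = min(zs), max(zs)
    span = (hi - lo) or 1.0          # guard against a flat pose
    pose_2p5d = []
    for x, y, z in joints_3d:
        z_implicit = depth_min + (z - lo) / span * (depth_max - depth_min)
        pose_2p5d.append((x, y, z_implicit))
    return pose_2p5d
```

The normalization discards absolute depth but preserves the front/back ordering of joints, which is what a layered 2.5D shadow-puppet scene needs.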

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung; Ryu, Minsoo; Park, Jong-Il
    • ETRI Journal, v.37 no.4, pp.766-771, 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
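The abstract attributes position recovery to triangulation from the exo-cameras. Assuming standard two-view linear (DLT) triangulation with known projection matrices (the abstract does not specify the solver), a sketch might be:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two exo-camera views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) observations in
    normalized image coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null space of A gives the point
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean
```

With the position fixed this way, only the orientation remains for the ego-camera to resolve, which is how the rotation/translation ambiguity is broken.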

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper addresses both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered regardless of lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points in the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, an RBF (radial basis function) is exploited to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is performed by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
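The abstract mentions using RBFs to deform the face model locally around the tracked feature points. A generic Gaussian-RBF deformation sketch (the kernel choice and parameters are assumptions, not taken from the paper) could look like:

```python
import numpy as np

def rbf_deform(control_src, control_dst, points, sigma=1.0):
    """Deform `points` by Gaussian-RBF interpolation of the displacements
    of tracked control (feature) points. Displacements are reproduced
    exactly at the control points and decay smoothly away from them."""
    control_src = np.asarray(control_src, float)
    disp = np.asarray(control_dst, float) - control_src

    def phi(d2):                       # Gaussian kernel on squared distance
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve phi(C, C) @ W = disp for the RBF weights W.
    d2 = ((control_src[:, None] - control_src[None, :]) ** 2).sum(-1)
    W = np.linalg.solve(phi(d2), disp)

    pts = np.asarray(points, float)
    d2p = ((pts[:, None] - control_src[None, :]) ** 2).sum(-1)
    return pts + phi(d2p) @ W
```

This matches the abstract's division of labor: major feature points are tracked directly, while nearby regional vertices move indirectly through the interpolated displacement field.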

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung; Park, Woojune; Kim, Jung Hee; Kim, Joonsoo; Yun, Kuk-Jin; Kang, Suk-Ju
    • ETRI Journal, v.44 no.1, pp.10-23, 2022
  • Despite significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information about camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure when recovering camera poses from a set of patch images arranged in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image setting in each incremental reconstruction step. It consists of two-way registration, 3D-point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors that reduces reprojection errors during optimization. We demonstrate that by using the images' connected structure at the different camera pose estimation steps, camera poses can be estimated more accurately for all structured mosaic-based image sets, including omnidirectional scenes.
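The bundle adjustment here adds a consistency constraint on rotation vectors on top of the usual reprojection error. A toy objective illustrating the idea (the weighting and the specific notion of "consistency" below are illustrative assumptions, not the paper's formulation) might be:

```python
import numpy as np

def ba_cost(reproj_residuals, rotation_vecs, lam=0.1):
    """Toy bundle-adjustment objective: summed squared reprojection
    residuals plus a penalty on each camera's rotation vector deviating
    from the mean rotation of the array (a stand-in consistency term)."""
    reproj = sum(float(r @ r) for r in reproj_residuals)
    R = np.asarray(rotation_vecs, float)
    consistency = float(((R - R.mean(axis=0)) ** 2).sum())
    return reproj + lam * consistency
```

The point of such a term is that cameras in a regular array should share nearly identical orientations, so rotations that drift apart during optimization are penalized even when they happen to fit the reprojections.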

Restoring Motion Capture Data for Pose Estimation (자세 추정을 위한 모션 캡처 데이터 복원)

  • Youn, Yeo-su; Park, Hyun-jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.5-7, 2021
  • Motion capture data files for pose estimation may contain inaccurate data depending on the surrounding environment and the degree of movement, so correction is necessary. In the past, inaccurate data were restored through manual post-processing, but recently various kinds of neural networks, such as LSTMs and R-CNNs, have been used for automated restoration. However, since neural network-based restoration requires substantial computing resources, this paper proposes a method that reduces computing resources while maintaining a restoration rate comparable to neural network-based methods. The proposed method automatically restores inaccurate motion capture data using posture measurement data (c3d). In experiments, data restoration rates ranged from 89% to 99% depending on the degree of inaccuracy in the data.
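The abstract does not spell out the restoration rules. As an illustration of lightweight, non-neural restoration of a single marker track from a c3d-style recording, linear interpolation between the nearest valid frames is one simple stand-in:

```python
def restore_track(values, is_valid):
    """Restore one marker-coordinate track: each frame flagged invalid is
    replaced by linear interpolation between its nearest valid neighbours
    (a rule-based stand-in; the paper's actual rules are not given)."""
    valid_idx = [i for i, v in enumerate(is_valid) if v]
    out = list(values)
    for i in range(len(values)):
        if is_valid[i]:
            continue
        prev = max((j for j in valid_idx if j < i), default=None)
        nxt = min((j for j in valid_idx if j > i), default=None)
        if prev is None:                 # leading gap: hold first valid value
            out[i] = values[nxt]
        elif nxt is None:                # trailing gap: hold last valid value
            out[i] = values[prev]
        else:
            t = (i - prev) / (nxt - prev)
            out[i] = values[prev] * (1 - t) + values[nxt] * t
    return out
```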


Deep Learning-Based Outlier Detection and Correction for 3D Pose Estimation (3차원 자세 추정을 위한 딥러닝 기반 이상치 검출 및 보정 기법)

  • Ju, Chan-Yang; Park, Ji-Sung; Lee, Dong-Ho
    • KIPS Transactions on Software and Data Engineering, v.11 no.10, pp.419-426, 2022
  • In this paper, we propose a method to improve the accuracy of 3D human pose estimation models across various movements. Existing human pose estimation models suffer from jitter, inversion, swap, and miss errors that produce incorrect joint coordinates, lowering the accuracy of the estimated poses. We propose a method consisting of a detection stage and a correction stage to handle these problems. A deep learning-based outlier detection method effectively detects outliers among the human pose coordinates during movement, and a rule-based correction method then corrects each outlier according to a simple rule. Experiments using 2D golf swing motion data show that the proposed method is effective across various motions and indicate the possibility of extension from 2D to 3D coordinates.
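As a toy illustration of the detect-then-correct idea (the paper uses a deep learning detector; a simple jump test on one coordinate is substituted here purely for illustration), a minimal version might be:

```python
def correct_outliers(xs, thresh=1.0):
    """Flag a frame's joint coordinate as an outlier if it jumps more than
    `thresh` away from BOTH temporal neighbours (a crude jitter test),
    then apply a simple rule: replace it with the neighbours' midpoint."""
    out = list(xs)
    outliers = []
    for i in range(1, len(xs) - 1):
        if abs(xs[i] - xs[i - 1]) > thresh and abs(xs[i] - xs[i + 1]) > thresh:
            outliers.append(i)
            out[i] = (xs[i - 1] + xs[i + 1]) / 2.0
    return out, outliers
```

Separating detection from correction, as the abstract describes, keeps the correction rule cheap and interpretable while letting the detector carry the hard decisions.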

Dynamic Human Pose Tracking using Motion-based Search (모션 기반의 검색을 사용한 동적인 사람 자세 추적)

  • Jung, Do-Joon; Yoon, Jeong-Oh
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.7, pp.2579-2585, 2010
  • This paper proposes a dynamic human pose tracking method that uses a motion-based search strategy on an image sequence obtained from a monocular camera. The proposed method compares image features between 3D human model projections and the real input images, repeating the process until predefined criteria are met and then selecting the 3D human pose that produces the best match. When searching for the configuration that best matches the input image, the search region is determined from the estimated 2D image motion, and the body configuration is then sampled randomly within that region. Because the 2D image motion is highly constrained, this significantly reduces the dimensionality of the feasible space. This strategy has two advantages: the motion estimation leads to an efficient allocation of the search space, and the pose estimation method is adaptive to various kinds of motion.
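The search strategy above samples candidate body configurations randomly inside a region predicted by the estimated 2D image motion. A schematic version (the scoring function, pose parameterization, and sampling scheme are placeholders, not the paper's) could be:

```python
import random

def motion_guided_search(score_fn, prev_pose, motion, radius, n_samples=200, seed=0):
    """Predict a search center from the previous pose plus estimated motion,
    then randomly sample candidate configurations within `radius` of it and
    keep the best-scoring one (higher score = better model/image match)."""
    rng = random.Random(seed)
    center = [p + m for p, m in zip(prev_pose, motion)]
    best, best_score = center, score_fn(center)
    for _ in range(n_samples):
        cand = [c + rng.uniform(-radius, radius) for c in center]
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best
```

Restricting samples to the motion-predicted region is what keeps the otherwise high-dimensional configuration search tractable.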

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon; Eun, Youngho; Park, Sang-Young
    • Journal of Astronomy and Space Sciences, v.35 no.4, pp.263-277, 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed simultaneously, taking actual experimental conditions into account. Two estimation techniques were used to estimate relative pose: a nonlinear least squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were used in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was incorporated into the estimation algorithm. The proposed algorithm was then verified experimentally using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve relative position estimation accuracy. This study characterizes the performance required for autonomous docking by confirming how estimation accuracy changes with the level of measurement error, and the hardware experiments confirm the effectiveness of the suggested algorithm and its applicability to real-world tasks.
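On-line estimation here uses an extended Kalman filter. A generic EKF predict/update step is sketched below; the spacecraft state, dynamics, and beacon measurement model from the paper are not reproduced, so the caller is assumed to supply the linearized matrices:

```python
import numpy as np

def ekf_step(x, P, F, Q, z, h, H, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P: state estimate and covariance; F, Q: linearized dynamics and
    process noise; z: measurement; h: measurement function; H, R: its
    Jacobian (evaluated at the predicted state) and measurement noise."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Adding the laser distance meter, as the abstract describes, amounts to appending one more row to z, h, H, and R: range measurements directly tighten the position part of the state.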

Stabilized 3D Pose Estimation of 3D Volumetric Sequence Using 360° Multi-view Projection (360° 다시점 투영을 이용한 3D 볼류메트릭 시퀀스의 안정적인 3차원 자세 추정)

  • Lee, Sol; Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.76-77, 2022
  • In this paper, we propose a method to stabilize the 3D pose estimation results of a 3D volumetric data sequence by matching pose estimation results from multiple views. A circle is drawn centered on the volumetric model, and the model is projected from viewpoints placed at regular angular intervals along it. After performing OpenPose 2D pose estimation on each projected image, the 2D joints are matched across views to localize the 3D joint positions. The tremor of the 3D joint sequence as a function of the angular spacing was quantified and plotted, and the minimum conditions for stable results are suggested.
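The abstract quantifies the "tremor" of the 3D joint sequence against the angular spacing but does not define the metric. One plausible reading (an assumption for illustration) is the mean frame-to-frame Euclidean displacement of a joint trajectory:

```python
def joint_tremor(seq):
    """Quantify tremor of one 3D joint trajectory as the mean
    frame-to-frame Euclidean displacement (an assumed metric; the
    abstract does not define its tremor measure)."""
    if len(seq) < 2:
        return 0.0
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(seq, seq[1:]):
        total += ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5
    return total / (len(seq) - 1)
```

Computed per joint and averaged over the sequence, such a measure makes the stability/angular-spacing trade-off directly comparable across configurations.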


Behavior Pattern Prediction Algorithm Based on 2D Pose Estimation and LSTM from Videos (비디오 영상에서 2차원 자세 추정과 LSTM 기반의 행동 패턴 예측 알고리즘)

  • Choi, Jiho; Hwang, Gyutae; Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.4, pp.191-197, 2022
  • This study proposes an image-based Pose Intention Network (PIN) algorithm for rehabilitation guided by patients' intentions. The purpose of the PIN algorithm is to enable active rehabilitation exercise, implemented by estimating the patient's motion and classifying the intention. Existing rehabilitation involves the inconvenience of attaching sensors directly to the patient's skin; moreover, the rehabilitation device moves the patient, which is a passive method. Our algorithm consists of two steps. First, we estimate the user's joint positions with the OpenPose algorithm, which efficiently estimates 2D human pose in an image. Second, an intention classifier is constructed to classify motions into three categories, taking as input a sequence of images with joint information. The intention network also learns correlations between joints and short-term changes in the joints, which can easily be used to determine the intention of a motion. To implement the proposed algorithm and conduct real-world experiments, we collected our own dataset composed of videos of three classes. The network is trained on short segment clips of the video. Experimental results demonstrate that the proposed algorithm is effective at classifying intentions from a short video clip.
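The title indicates the sequence classifier is LSTM-based. For reference, a single forward step of a standard LSTM cell is sketched below; the gate ordering, sizes, and initialization are conventional choices, not taken from the paper, and a full classifier would stack such steps over the joint sequence and add a softmax head:

```python
import numpy as np

def lstm_step(x, h, c, Wx, Wh, b):
    """One forward step of a standard LSTM cell.
    x: input (e.g. flattened joint coordinates for one frame);
    h, c: previous hidden and cell state of size H;
    Wx (4H x D), Wh (4H x H), b (4H): gates stacked as i, f, o, g."""
    H = h.shape[0]
    z = Wx @ x + Wh @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))   # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * H:3 * H]))  # output gate
    g = np.tanh(z[3 * H:])                  # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The cell state lets the classifier accumulate short-term changes in the joints across frames, which is exactly the temporal correlation the abstract says the intention network exploits.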