• Title/Summary/Keyword: Real-time facial motion capture


Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong; Kim, Ki-Hong; Lee, David-Junesok; Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication, v.14 no.1, pp.129-135, 2022
  • Digital models of real humans appear increasingly often in VR/AR applications, where real-time markerless facial capture animation of personalized virtual human faces is an important research topic. The traditional way to achieve personalized facial animation requires several experienced animation staff, and in practice the complex process and difficult technology can be an obstacle for inexperienced users. This paper proposes a new pipeline for this kind of work that has lower cost and takes less time than the traditional production method. Starting from a personalized face model obtained through 3D reconstruction, the model is first retopologized in R3ds Wrap, then 52 blendshape model files compatible with ARKit are produced in Avatary, and finally real-time markerless facial motion capture of the 3D human model is realized on the UE4 platform. The study makes rational use of the strengths of each tool and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D human models; the proposed process can be helpful to other researchers working on similar problems.
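The last stage of the pipeline above, driving ARKit-style blendshapes in a real-time engine, amounts to a weighted sum of per-shape vertex deltas over the 52 coefficients ARKit streams each frame. A minimal NumPy sketch; the function name and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Deform a neutral face mesh by a weighted sum of blendshape deltas.

    neutral : (V, 3) array of rest-pose vertex positions
    deltas  : (K, V, 3) array, deltas[k] = target_shape_k - neutral
    weights : (K,) array of coefficients in [0, 1] (e.g. ARKit's 52 values)
    """
    weights = np.clip(weights, 0.0, 1.0)           # ARKit coefficients are 0..1
    return neutral + np.tensordot(weights, deltas, axes=1)

# toy example: a mesh of one vertex and two blendshapes
neutral = np.zeros((1, 3))
deltas = np.array([[[1.0, 0.0, 0.0]],              # shape 0 moves vertex in +x
                   [[0.0, 2.0, 0.0]]])             # shape 1 moves vertex in +y
out = apply_blendshapes(neutral, deltas, np.array([0.5, 0.25]))
# out -> [[0.5, 0.5, 0.0]]
```

In a real ARKit-to-UE4 setup the 52 weights arrive per frame from face tracking; the same linear combination is what the engine's morph targets evaluate.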

Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society, v.11 no.8, pp.1138-1145, 2008
  • Recently, performance-driven facial animation has become popular in various areas. In television and games, it is important to guarantee real-time animation for various characters whose appearance differs from the performer's. In this paper, we present a new facial animation approach based on motion capture. To this end, we address three issues: facial expression capture, expression mapping, and facial animation. Finally, we show the results of various experiments on different types of face models.

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon; Park, Dong-Joo; Lee, Tae-Gu
    • Cartoon and Animation Studies, s.37, pp.221-245, 2014
  • With the success of the world's first 3D computer-animated feature film, "Toy Story" (1995), industrial development of 3D computer animation gained considerable momentum. Various 3D animations for TV were subsequently produced, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has been pursued actively in line with the expanding industrial demand in this field; compared with the traditional approach of producing animation through hand drawings, 3D computer animation is vastly more efficient to produce. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation have been conducted, with the aim of improving the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper will serve as baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, according to production time and cost, the degree of sophistication required, and the media in use.

A Study of Facial Expression of Digital Character with Muscle Simulation System

  • He, Yangyang; Choi, Chul-young
    • International Journal of Advanced Smart Convergence, v.8 no.2, pp.162-169, 2019
  • Facial rigging technology has continued to develop since the start of the 21st century. Various facial rigging methods are still being attempted, and techniques for capturing facial geometry in real time have recently appeared. Modern CG now produces images that are hard to distinguish from actual photographs, but this kind of technology still requires a great deal of equipment and cost. The purpose of this study is to perform facial rigging using muscle simulation instead of such equipment. Muscle simulations were originally made primarily for use in a creature's body; in this study, however, we use muscle simulation for facial rigging to create a more realistic, creature-like effect. To do this, we used Ziva Dynamics' Ziva VFX muscle simulation software. We also developed a method to overcome the disadvantages of muscle simulation: it cannot run in real time, simulating takes time, and connecting the complex muscles makes the work lengthy. Our study solves this problem using blendshapes, and we show how to apply our method to a face rig.

Face-to-face Communication in Cyberspace using Analysis and Synthesis of Facial Expression

  • Shigeo Morishima
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 1999.06a, pp.111-118, 1999
  • Recently, computers can create a cyberspace to walk through using interactive virtual-reality techniques, and an avatar in cyberspace can provide a virtual face-to-face communication environment. In this paper, an avatar with a real face is realized in cyberspace, and a multiuser communication system is constructed with voice transmitted over a network. Voice from a microphone is transmitted and analyzed, and the avatar's mouth shape and facial expression are synchronously estimated and synthesized in real time. An entertainment application of the real-time voice-driven synthetic face, an interactive movie, is also introduced. Finally, a face motion capture system using a physics-based face model is presented.
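The paper estimates mouth shape from analyzed voice in real time. As a much-simplified illustration of the idea, the toy sketch below maps per-frame loudness (RMS) of a microphone signal to a single jaw-open weight; the function name, frame rate, and gain constant are assumptions for the example, not the paper's actual analysis:

```python
import numpy as np

def jaw_open_weights(samples, rate, fps=30, gain=4.0):
    """Estimate a per-animation-frame 'jaw open' weight from raw audio.

    samples : 1-D float array of microphone samples in [-1, 1]
    rate    : audio sample rate in Hz
    Returns one weight in [0, 1] per animation frame (fps frames/sec).
    """
    hop = rate // fps                        # audio samples per video frame
    n_frames = len(samples) // hop
    weights = np.empty(n_frames)
    for i in range(n_frames):
        frame = samples[i * hop:(i + 1) * hop]
        rms = np.sqrt(np.mean(frame ** 2))   # loudness of this audio frame
        weights[i] = np.clip(gain * rms, 0.0, 1.0)
    return weights

# 1 second of a 220 Hz tone: the mouth should stay open while the tone plays
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
w = jaw_open_weights(0.2 * np.sin(2 * np.pi * 220.0 * t), rate=16000)
```

A real system would also classify phonemes (vowel shapes, lip closure) rather than only loudness; this sketch covers just the amplitude-to-openness mapping.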

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial states based on facial motion data. By distributing facial expressions in an intuitive space using the LLE algorithm, animations can be created, or expressions controlled in real time, from the expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2D plane and selecting a series of expressions from it, animators can create animations or control the expressions of 3D avatars in real time. Distributing the roughly 2,400 expression frames in an intuitive space requires representing the state of each expression; for this, we use the distance matrix of distances between pairs of feature points on the face. The LLE algorithm is then used to visualize this data on the 2D plane. Animators control facial expressions and create animations through the system's user interface, and the paper evaluates the results of the experiment.
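The projection step described above, flattening each frame's feature-point distance matrix and embedding the frames on a 2D plane with LLE, can be sketched with NumPy alone. This is a minimal textbook LLE (Roweis and Saul), not the paper's implementation; the neighbor count, regularization constant, and toy data are assumptions:

```python
import numpy as np

def lle_embed(X, n_neighbors=6, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding with NumPy only.

    X : (n, d) matrix, one flattened feature-point distance matrix per frame.
    Returns an (n, n_components) 2D layout of the expression frames.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self
    knn = np.argsort(d2, axis=1)[:, :n_neighbors]         # k nearest frames

    # reconstruction weights: express each frame as a combination of neighbors
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                              # centered neighbors
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)      # regularize Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()

    # embedding: bottom eigenvectors of (I - W)^T (I - W)
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                    # drop constant vector

# each row stands in for one flattened distance matrix of facial feature points
rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 10))
coords = lle_embed(frames)        # one 2D point per expression frame
```

Navigating the resulting 2D layout and picking nearby points is what gives the interface its smooth expression transitions.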

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.3, pp.9-16, 2007
  • This paper describes a methodology for distributing high-dimensional facial motion data on a 2D plane using the Isomap algorithm, together with user-interface techniques for controlling facial expressions by selecting them while the user navigates this space in real time. The Isomap algorithm proceeds in three steps. First, define the adjacency of each expression datum; the smallest adjacency distances are determined using the Pearson correlation coefficient. Second, calculate the manifold distance between expressions and construct the expression space; the space is created by computing the shortest (manifold) distance between any two expressions, for which we use the Floyd algorithm. Third, materialize the multi-dimensional expression space using multidimensional scaling and project it onto a 2D plane. Users can control the facial expressions of a 3D avatar through the user interface while navigating the 2D space in real time.
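The three steps above, kNN adjacency, the Floyd algorithm for manifold (shortest-path) distances, then MDS down to a plane, are the classical Isomap recipe and can be sketched directly in NumPy. Euclidean distance stands in here for the paper's Pearson-correlation adjacency measure; the neighbor count and toy data are assumptions:

```python
import numpy as np

def isomap_embed(X, n_neighbors=4, n_components=2):
    """Isomap in three steps: kNN adjacency, Floyd-Warshall manifold
    distances, classical MDS projected onto a 2D plane."""
    n = X.shape[0]
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))

    # step 1: adjacency -- keep only edges to the k nearest neighbors
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    order = np.argsort(D, axis=1)
    for i in range(n):
        for j in order[i, 1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]

    # step 2: Floyd algorithm for all-pairs shortest (manifold) distances
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])

    # step 3: classical MDS on the geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]    # largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# toy data: 30 "expression frames" lying along a smooth 1-D curve
t = np.linspace(0.0, 3.0, 30)
frames = np.column_stack([t, np.sin(t)])
coords = isomap_embed(frames)     # (30, 2) layout on the plane
```

Unlike PCA, the geodesic distances let frames that are close along the expression manifold, but far apart in straight-line distance, land far apart on the plane.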

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • Journal of Korea Multimedia Society, v.7 no.10, pp.1478-1484, 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in the space of facial expressions. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use the distance matrix that represents the distances between pairs of feature points on the face; the set of distance matrices serves as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this process, we visualize the space of expressions in 2D using Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and the paper evaluates the results.
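The PCA projection described above can be sketched end to end: build one feature-point distance matrix per frame, flatten it, and project onto the top two principal axes from the SVD of the centered data. The toy frame count and feature-point count below are assumptions, not the paper's 2,400-frame data:

```python
import numpy as np

def pca_project(dist_matrices, n_components=2):
    """Project per-frame feature-point distance matrices to 2D via PCA.

    dist_matrices : (n_frames, n_points, n_points) symmetric distance matrices.
    Returns (n_frames, n_components) coordinates for visualization.
    """
    n = dist_matrices.shape[0]
    X = dist_matrices.reshape(n, -1)          # flatten each distance matrix
    Xc = X - X.mean(axis=0)                   # center across frames
    # principal axes = right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy stand-in for the capture data: 100 frames of 12 facial feature points
rng = np.random.default_rng(2)
pts = rng.normal(size=(100, 12, 3))
# pairwise distances between feature points within each frame -> (100, 12, 12)
D = np.linalg.norm(pts[:, :, None, :] - pts[:, None, :, :], axis=-1)
coords = pca_project(D)                       # (100, 2) map of expression states
```

The distance-matrix representation makes the per-frame state invariant to head translation and rotation, which is why it is a reasonable input for the projection.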

A Study on the Correction of Face Motion Recognition Data Using Kinect Method (키넥트 방식을 활용한 얼굴모션인식 데이터 제어에 관한 연구)

  • Lee, Junsang; Park, Junhong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.513-515, 2019
  • Techniques for recognizing depth values with the Kinect infrared projector continue to evolve, and techniques for tracking human movement have developed from marker-based toward markerless methods. Capturing facial movement with Kinect has the disadvantage of limited precision, and methods for controlling facial gestures and movements in real time still require much research. This paper therefore proposes a technique for creating natural 3D image content by studying how to apply and control correction techniques on the face recognition data extracted using the Kinect infrared method.

Development of a Serious Game using EEG Monitor and Kinect (뇌파측정기와 키넥트를 이용한 기능성 게임 개발)

  • Jung, Sang-Hyub; Han, Seung-Wan; Kim, Hyo-Chan; Kim, Ki-Nam; Song, Min-Sun; Lee, Kang-Hee
    • Journal of Korea Game Society, v.15 no.4, pp.189-198, 2015
  • This paper presents a serious game controlled by EEG and motion capture. We developed the game for two competing players, as follows. One player uses a control interface based on EEG signals, on the premise that the player's facial movements depict the player's emotion and intensity throughout the game. The other player uses a control interface based on Kinect motion capture, which captures that player's vertical and lateral movements as well as running state. The game displays the first player's EEG as real-time graphics along the map on the game screen; that player can then pace himself based on this visualization of his brain activity, which leads to higher concentration throughout the game and a better score. In addition, the second player can improve his physical abilities, since the game action is driven by real movements.