• Title/Summary/Keyword: Blendshape


A Study on Facial Blendshape Rig Cloning Method Based on Deformation Transfer Algorithm (메쉬 변형 전달 기법을 통한 블렌드쉐입 페이셜 리그 복제에 대한 연구)

  • Song, Jaewon;Im, Jaeho;Lee, Dongha
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.9
    • /
    • pp.1279-1284
    • /
    • 2021
  • This paper addresses the task of transferring facial blendshape models to an arbitrary target face. Blendshapes are a common method for facial rigging; however, producing a blendshape rig is a time-consuming process in the current facial animation pipeline. We propose automatic blendshape facial rigging based on our blendshape transfer method. Our method computes the difference between the source and target facial models and then transfers the source blendshapes to the target face based on a deformation transfer algorithm. Our automatic method enables efficient production of a controllable digital human face; the results can be applied to various applications such as games, VR chatting, and AI agent services.
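The delta-based core of such a blendshape transfer can be sketched as follows. This is a simplified, hypothetical illustration (the function name and toy data are assumptions, not from the paper): it copies per-vertex displacements between meshes in dense correspondence, whereas a full deformation transfer algorithm (in the style the paper builds on) would transfer per-triangle deformation gradients and solve a linear system.

```python
import numpy as np

def transfer_blendshape_delta(src_neutral, src_shape, tgt_neutral):
    """Copy a blendshape from a source face to a target face by
    re-applying the per-vertex displacement (delta) of the source
    expression onto the target neutral mesh.

    All meshes are (V, 3) vertex arrays in dense correspondence.
    This is a naive delta transfer, not full deformation transfer.
    """
    delta = src_shape - src_neutral   # source expression displacement
    return tgt_neutral + delta        # apply displacement to target face

# Tiny toy example: two "faces" with 3 vertices each.
src_neutral = np.zeros((3, 3))
src_smile = src_neutral + np.array([[0.0, 0.1, 0.0]] * 3)  # vertices move up
tgt_neutral = np.ones((3, 3))                              # a different face
tgt_smile = transfer_blendshape_delta(src_neutral, src_smile, tgt_neutral)
```

Naive delta transfer breaks down when source and target faces differ strongly in proportion, which is why deformation-gradient-based methods are preferred in production.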

A Study on the Fabrication of Facial Blend Shape of 3D Character - Focusing on the Facial Capture of the Unreal Engine (3D 캐릭터의 얼굴 블렌드쉐입(blendshape)의 제작연구 -언리얼 엔진의 페이셜 캡처를 중심으로)

  • Lou, Yi-Si;Choi, Dong-Hyuk
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.8
    • /
    • pp.73-80
    • /
    • 2022
  • Facial expression is an important means of conveying character in films and animation, and facial capture technology can make the production of facial animation for 3D characters faster and more effective. Blendshape techniques are the most widely used methods for producing high-quality 3D facial animation, but traditional blendshape production often takes a long time. The purpose of this study is therefore to shorten the blendshape production period while achieving results comparable to traditional production. In this paper, a cross-model blendshape transfer method is compared with the traditional method of producing blendshapes, and the validity of the new method is verified. This study used the kit boy character developed by Unreal Engine as the experimental subject, conducted a facial capture test using the two blendshape production techniques, and compared and analyzed the resulting blendshape-driven facial animation.

A Study on the Correction of Face Motion Recognition Data Using Kinect Method (키넥트 방식을 활용한 얼굴모션인식 데이터 제어에 관한 연구)

  • Lee, Junsang;Park, Junhong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.513-515
    • /
    • 2019
  • Techniques for recognizing depth values using Kinect's infrared projector continue to evolve, and techniques for tracking human movement have developed from marker-based methods toward markerless methods. Capturing facial movement with Kinect has the disadvantage of limited precision, and methods for controlling facial gestures and movements in real time still require much research. Therefore, this paper proposes a technique for creating natural 3D image content by studying how to apply and control blending technology on facial recognition data extracted using the Kinect infrared method.


A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.461-470
    • /
    • 2023
  • While speech animation generation employing deep learning has been actively researched for English, there has been no prior work for Korean. This paper therefore employs supervised deep learning to generate Korean speech animation for the first time. In doing so, we find that deep learning effectively reduces speech animation research to speech recognition research, the predominant technique, and we study how to make the best use of this effect for Korean speech animation generation. The effect can help revitalize the recently inactive Korean speech animation research efficiently and effectively by clarifying the top-priority research target. This paper proceeds as follows: (i) it chooses the blendshape animation technique, (ii) implements the deep learning model as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module, (iii) builds a Korean speech facial motion capture dataset, (iv) prepares two comparison deep learning models (one adopting an English ASR module, the other a Korean ASR module, with both using the same basic structure for their FAC modules), and (v) trains the FAC modules of both models dependently on their ASR modules. A user study demonstrates that the model with the Korean ASR module and dependently trained FAC module (scoring 4.2/5.0) generates decisively more natural Korean speech animations than the model with the English ASR module and dependently trained FAC module (scoring 2.7/5.0). The result confirms the aforementioned effect, showing that the quality of Korean speech animation comes down to the accuracy of Korean ASR.
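The FAC module's per-frame output in such a pipeline ultimately drives a standard linear blendshape model. The following sketch shows that final step only; the function name, the viseme-like targets, and the weight values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def apply_blendshape_weights(neutral, targets, weights):
    """Linear blendshape model: mesh = neutral + sum_i w_i * (target_i - neutral).

    `neutral` is a (V, 3) rest mesh, `targets` is (K, V, 3) blendshape
    targets, and `weights` is a (K,) coefficient vector such as a
    facial-action-coding (FAC) module might predict per frame of speech.
    """
    deltas = targets - neutral[None]              # (K, V, 3) displacement basis
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy rig: 4 vertices, two hypothetical viseme-like targets.
neutral = np.zeros((4, 3))
jaw_open = neutral.copy(); jaw_open[:, 1] = -1.0    # all vertices drop
lip_pucker = neutral.copy(); lip_pucker[:, 0] = 0.5  # all vertices push forward
targets = np.stack([jaw_open, lip_pucker])

# One animation frame: mostly open jaw, slight pucker.
frame = apply_blendshape_weights(neutral, targets, np.array([0.8, 0.2]))
```

Because the model is linear in the weights, a speech-driven network only has to regress a low-dimensional weight vector per frame rather than full vertex positions.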

A Study of Facial Expression of Digital Character with Muscle Simulation System

  • He, Yangyang;Choi, Chul-young
    • International journal of advanced smart convergence
    • /
    • v.8 no.2
    • /
    • pp.162-169
    • /
    • 2019
  • Facial rigging technology has continued to develop since the turn of the 21st century. Various facial rigging methods are still being attempted, and techniques that capture geometry in real time have recently appeared. Modern CG now produces images that are hard to distinguish from actual photographs; however, this kind of technology still requires a lot of equipment and cost. The purpose of this study is to perform facial rigging using muscle simulation instead of such equipment. Muscle simulation was originally made primarily for use on a creature's body; in this study, however, we use it for facial rigging to create a more realistic, creature-like effect. To do this, we used Ziva Dynamics' Ziva VFX muscle simulation software. We also develop a method to overcome the disadvantages of muscle simulation: it cannot run in real time, simulating takes time, and connecting the complex muscles makes setup lengthy. Our study solves this problem using blendshapes, and we show how to apply our method to a face rig.
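One common way to combine an offline muscle simulation with a real-time blendshape rig, as the abstract's blendshape workaround suggests, is to approximate a simulated pose with a linear combination of existing blendshape targets. The sketch below is a hypothetical illustration of that idea using least squares; the function name and toy data are assumptions, not the paper's method.

```python
import numpy as np

def fit_blendshape_weights(neutral, targets, simulated):
    """Approximate a muscle-simulated pose with a linear blendshape rig.

    Solves least-squares for weights w minimizing
    || (neutral + B @ w) - simulated ||, where each column of B is one
    flattened target delta. All meshes are (V, 3) arrays in correspondence.
    """
    B = (targets - neutral[None]).reshape(len(targets), -1).T  # (3V, K) basis
    d = (simulated - neutral).reshape(-1)                      # (3V,) residual
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return w

# Toy rig: 5 vertices, two blendshape targets.
neutral = np.zeros((5, 3))
t1 = neutral.copy(); t1[:, 0] = 1.0
t2 = neutral.copy(); t2[:, 1] = 1.0

# A pose the (slow, offline) muscle simulation produced.
simulated = 0.3 * t1 + 0.7 * t2
w = fit_blendshape_weights(neutral, np.stack([t1, t2]), simulated)
```

Once the weights are baked, the rig plays back in real time as an ordinary blendshape animation, avoiding per-frame simulation cost.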