Title/Summary/Keyword: Facial appearance

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice, v.7 no.2, pp.32-39, 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of this discriminative capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of an original expression image containing facial appearance only, an expression image with facial geometry visualization is used as the input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize the geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that a CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
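
The paper's central trick, rendering facial geometry on top of the appearance image before it reaches the CNN, can be illustrated in a few lines. The sketch below is a minimal reading of that idea, not the authors' implementation: the drawing style (white dots via OpenCV) and the toy landmark coordinates are assumptions, and a real pipeline would obtain landmarks from a detector such as dlib's 68-point model.

```python
import cv2
import numpy as np

def visualize_geometry(face_img, landmarks, radius=1):
    """Draw landmark points onto the expression image so the CNN sees
    appearance and geometry in one input (hypothetical drawing style)."""
    out = face_img.copy()
    for (x, y) in landmarks:
        cv2.circle(out, (int(x), int(y)), radius, (255, 255, 255), -1)
    return out

# Toy usage: a blank stand-in for a face crop plus made-up landmarks;
# real landmarks would come from a detector such as dlib's 68-point model.
face = np.zeros((96, 96, 3), dtype=np.uint8)
landmarks = [(30, 40), (66, 40), (48, 60), (48, 75)]
cnn_input = visualize_geometry(face, landmarks)
print(cnn_input.shape)
```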

Acromegaloid Facial Appearance Syndrome - A New Case in India

  • Rai, Arpita;Sattur, Atul P.;Naikmasur, Venkatesh G.
    • Journal of Genetic Medicine, v.10 no.1, pp.57-61, 2013
  • Acromegaloid Facial Appearance syndrome is a very rare syndrome combining an acromegaloid-like facial appearance, thickened lips and oral mucosa, and acral enlargement. The progressive facial dysmorphism is characterized by a coarse facies, a long bulbous nose, high-arched eyebrows, thickening of the lips and oral mucosa leading to exaggerated rugae and frenula, a furrowed tongue, and narrow palpebral fissures. We report a case of acromegaloid facial appearance syndrome in a 19-year-old male patient who presented with all the characteristic features of the syndrome along with previously unreported anomalies such as dystrophic nails, postaxial polydactyly, and incisal notching of the teeth.

The Functionality of Facial Appearance and Its Importance to a Korean Population

  • Kim, Young Jun;Park, Jang Wan;Kim, Jeong Min;Park, Sun Hyung;Hwang, Jae Ha;Kim, Kwang Seog;Lee, Sam Yong;Shin, Jun Ho
    • Archives of Plastic Surgery, v.40 no.6, pp.715-720, 2013
  • Background Many people have an interest in the correction of facial scars or deformities caused by trauma, and the increasing ability to correct such flaws is one reason for the growing popularity of facial plastic surgery. In addition to its roles in communication, breathing, eating, olfaction, and vision, the appearance of the face plays an important role in human interactions, including social activities. However, studies on the importance of appearance as a function of the face are scarce. Therefore, in the present study, we evaluated the importance of the functions of the face in Korea. Methods We conducted an online panel survey of 300 participants (age range, 20-70 years). Each respondent was administered a demographic data form, the Facial Function Assessment Scale, the Rosenberg Self-Esteem Scale, and standard gamble questionnaires. Results In the evaluation of the importance of facial functions, a normal appearance was considered as important as communication, breathing, speech, and vision. Of the 300 participants, 85% stated that a normal appearance is important in social activities. Conclusions The results of this survey of a cross-section of the Korean population indicated that a normal appearance was considered one of the principal facial functions, rated more important than olfaction and expression. Moreover, a normal appearance was regarded as an important facial function for leading a normal life in Korea.

Emotion Recognition based on Tracking Facial Keypoints

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology, v.18 no.1, pp.97-101, 2019
  • Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes an emotion recognition method that extracts facial keypoints with the Active Appearance Model and classifies the user's emotion with a proposed classification model of the facial features. The appearance model captures the variation in expression, which the proposed classification model evaluates according to the change in the user's facial expression. The method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we measured the success rate on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared with existing schemes.
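
The abstract leaves the classifier unspecified, so the sketch below shows one plausible reading: classify the displacement of tracked keypoints relative to a neutral frame by nearest-neighbour matching against per-emotion template displacements. The four-keypoint layout, the templates, and the nearest-neighbour rule are all hypothetical.

```python
import numpy as np

EMOTIONS = ["normal", "happy", "sad", "angry"]

def classify_emotion(neutral_pts, current_pts, templates):
    """Nearest-neighbour match of the keypoint displacement vector against
    one hypothetical template displacement per emotion."""
    disp = (current_pts - neutral_pts).ravel()
    dists = [np.linalg.norm(disp - t.ravel()) for t in templates]
    return EMOTIONS[int(np.argmin(dists))]

# Toy usage: 4 tracked keypoints, one random template per emotion.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(4, 2))
templates = [rng.normal(size=(4, 2)) for _ in EMOTIONS]
print(classify_emotion(neutral, neutral + templates[1], templates))  # happy
```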

Facial Feature Extraction using Multiple Active Appearance Model

  • Park, Hyun-Jun;Kim, Kwang-Baek;Cha, Eui-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.8 no.8, pp.1201-1206, 2013
  • The Active Appearance Model (AAM) is one of the standard facial feature extraction techniques. In this paper, we propose the Multiple Active Appearance Model (MAAM). The proposed method uses two AAMs trained with different training parameters, so each AAM has different strengths and compensates for the weak points of the other. We performed facial feature extraction on 100 images to verify the performance of MAAM. Experimental results show that MAAM gives more accurate results than a single AAM with fewer fitting iterations.
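
A minimal sketch of the two-AAM idea follows. The combination rule used here, keeping whichever model reaches the lower fitting error, is only one plausible way to let the models complement each other; the paper does not spell out its rule, and the fit functions below stand in for a real AAM library.

```python
def maam_fit(image, aam_a, aam_b):
    """Fit both AAMs and keep whichever reaches the lower fitting error
    (a hypothetical combination rule; the paper does not specify one)."""
    shape_a, err_a = aam_a(image)
    shape_b, err_b = aam_b(image)
    return shape_a if err_a <= err_b else shape_b

# Toy stand-ins: each "AAM" returns (landmark estimate, fitting error).
aam_a = lambda img: ("shape from AAM-A", 0.42)
aam_b = lambda img: ("shape from AAM-B", 0.27)
print(maam_fit(None, aam_a, aam_b))  # AAM-B's fit wins here
```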

Facial Feature Tracking Using Adaptive Particle Filter and Active Appearance Model

  • Cho, Durkhyun;Lee, Sanghoon;Suh, Il Hong
    • The Journal of Korea Robotics Society, v.8 no.2, pp.104-115, 2013
  • For natural human-robot interaction, the location and shape of facial features must be known in real environments. Facial features can be tracked robustly by combining a particle filter with the Active Appearance Model, but the processing speed of this combination is too slow. In this paper, we propose two ideas to improve its efficiency: changing the number of particles according to the situation, and switching the prediction model according to the situation. Experimental results show that the proposed method is about three times faster than the plain combination of particle filter and Active Appearance Model, while its tracking performance is maintained.
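
One plausible reading of "changing the number of particles according to the situation" is to shrink the particle set on confidently tracked frames and grow it when confidence drops. The thresholds and the halving/doubling schedule in the sketch below are illustrative assumptions, not values from the paper.

```python
def adapt_particle_count(n, confidence, n_min=50, n_max=500):
    """Shrink the particle set when tracking is confident and grow it when
    confidence drops (one plausible reading of situational adaptation)."""
    if confidence > 0.8:
        return max(n_min, n // 2)   # easy frame: fewer particles, faster
    if confidence < 0.4:
        return min(n_max, n * 2)    # hard frame: more particles, more robust
    return n                        # middling confidence: keep current size

n = 200
for conf in [0.9, 0.9, 0.3, 0.5]:   # toy per-frame confidence trace
    n = adapt_particle_count(n, conf)
    print(f"confidence={conf:.1f} -> {n} particles")
```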

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems, v.6 no.2, pp.261-268, 2010
  • Tracking human facial expressions within a video image has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was initially proposed for face recognition, but it turns out to have many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM; in this study, we adopt an independent AAM fitted with the Inverse Compositional Image Alignment method. The system was evaluated on the standard Cohn-Kanade facial expression database, and the results show that it could have numerous potential applications.
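
The appeal of the inverse compositional formulation is that the template gradient and Hessian are precomputed once instead of being re-derived at every iteration. The toy below applies that structure to 1-D translation-only alignment, which is far simpler than AAM fitting but exhibits the same update pattern; it is a didactic sketch, not the paper's algorithm.

```python
import numpy as np

def inverse_compositional_shift(template, image, p=0.0, iters=20):
    """1-D translation-only inverse compositional alignment: the template
    gradient and Hessian are precomputed once, and every iteration merely
    composes an incremental warp (a toy analogue of AAM fitting)."""
    x = np.arange(len(template), dtype=float)
    grad = np.gradient(template)             # precomputed, reused each step
    hessian = np.sum(grad * grad)
    for _ in range(iters):
        warped = np.interp(x + p, x, image)  # I(W(x; p))
        dp = np.sum(grad * (warped - template)) / hessian
        p -= dp                              # inverse composition of warps
    return p

x = np.arange(50, dtype=float)
template = np.exp(-0.5 * ((x - 25) / 4.0) ** 2)  # Gaussian bump at 25
image = np.exp(-0.5 * ((x - 28) / 4.0) ** 2)     # same bump shifted to 28
print(inverse_compositional_shift(template, image))  # converges to about 3
```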

Local Appearance-based Face Recognition Using SVM and PCA

  • Park, Seung-Hwan;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.3, pp.54-60, 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small areas, extracts features from each area using statistical analysis, and decides the identity of the face through a voting scheme that integrates the classification results of the individual areas. The conventional local appearance-based method divides face images into small pieces and uses all of the pieces in the recognition process. In this paper, we propose a local appearance-based method that uses only the relatively important facial components, namely those such as the eyes, nose, and mouth that differ greatly from person to person. The proposed method locates these components precisely using support vector machines (SVM), constructs a set of small images containing the facial parts, and extracts features from each facial component image using principal component analysis (PCA). We compared the performance of the proposed method with that of conventional methods; the results show that it outperforms the conventional local appearance-based method while preserving that method's advantages.
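
The per-component pipeline, PCA features per facial part integrated by a voting scheme, can be sketched with scikit-learn. The sketch assumes component detection has already produced aligned crops, and using an SVM as the per-component classifier is an extra assumption; in the paper, SVMs locate the components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_component_classifiers(parts_train, labels, n_components=5):
    """Fit one PCA + SVM pipeline per facial component (e.g. eyes, nose,
    mouth); identity is decided later by voting across components."""
    models = []
    for X in parts_train:                    # X: (n_samples, n_pixels)
        pca = PCA(n_components=n_components).fit(X)
        clf = SVC().fit(pca.transform(X), labels)
        models.append((pca, clf))
    return models

def predict_by_voting(models, parts_test):
    """Majority vote over the per-component predictions."""
    votes = [clf.predict(pca.transform(x[None]))[0]
             for (pca, clf), x in zip(models, parts_test)]
    return max(set(votes), key=votes.count)

# Toy data: 3 components, 20 samples of 64 pixels each, 2 identities.
rng = np.random.default_rng(0)
parts = [rng.normal(size=(20, 64)) for _ in range(3)]
labels = np.repeat([0, 1], 10)
models = train_component_classifiers(parts, labels)
print(predict_by_voting(models, [p[0] for p in parts]))
```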

Evaluation of the facial dimensions of young adult women with a preferred facial appearance

  • Kim, Sae Yong;Bayome, Mohamed;Park, Jae Hyun;Kook, Yoon-Ah;Kang, Ju Hee;Kim, Kang Hyuk;Moon, Hong-Beom
    • The Korean Journal of Orthodontics, v.45 no.5, pp.253-260, 2015
  • Objective: The aim of this study was to evaluate the facial dimensions of young adult women with a preferred facial appearance and compare the results with those from the general population. Methods: Twenty-five linear, nine angular, and three area measurements were made and four ratios were calculated using a sample of standardized frontal and lateral photographs of 46 young adult women with a preferred facial appearance (Miss Korea group) and 44 young adult women from the general population (control group). Differences between the two groups were analyzed using multivariate analysis of variance (MANOVA). Results: Compared with the control group, the Miss Korea group exhibited a significantly greater facial height, total facial height (TFH; trichion-menton), facial width (tragus right-tragus left), facial depth (tragus-true vertical line), and trichion-nasion/TFH ratio and smaller subnasale-menton/TFH and facial width/TFH ratios. Furthermore, the control group had smaller intercanthal and interpupillary widths. Conclusions: The Miss Korea group exhibited longer, wider, and deeper faces compared with those from the general population. Furthermore, the Miss Korea group had larger eyes, longer but less protruded noses, longer and more retruded lower lips and chins, larger lip vermilion areas, and smaller labiomental angles. These results suggest that the latest trends in facial esthetics should be considered during diagnosis and treatment planning for young women with dentofacial abnormalities.
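
For readers unfamiliar with the statistics, a MANOVA of the kind reported here can be run with statsmodels; the sketch below uses two hypothetical measurements and made-up group values purely to show the mechanics, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical toy measurements for two groups of 40 faces each;
# these numbers are illustrative only, not the study's data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "facial_height": np.r_[rng.normal(120, 5, 40), rng.normal(124, 5, 40)],
    "facial_width":  np.r_[rng.normal(135, 6, 40), rng.normal(138, 6, 40)],
    "group": ["control"] * 40 + ["preferred"] * 40,
})

# Multivariate test (Wilks' lambda, Pillai's trace, ...) of the group effect.
fit = MANOVA.from_formula("facial_height + facial_width ~ group", data=df)
print(fit.mv_test())
```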

3D Emotional Avatar Creation and Animation using Facial Expression Recognition

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society, v.17 no.9, pp.1076-1083, 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.
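
The paper says the emotion templates "can be easily combined with the skin texture of the 3D face at runtime" without specifying how; simple alpha compositing is one plausible mechanism and is sketched below with hypothetical toy textures.

```python
import numpy as np

def apply_emotion_template(skin_texture, template, alpha):
    """Blend an artistic emotion template over the avatar's skin texture;
    alpha scales the emotional emphasis (hypothetical compositing rule)."""
    out = (1.0 - alpha) * skin_texture + alpha * template
    return np.clip(out, 0.0, 1.0)

# Toy 4x4 RGB textures with values in [0, 1].
rng = np.random.default_rng(0)
skin = rng.uniform(size=(4, 4, 3))
angry = np.zeros((4, 4, 3))
angry[..., 0] = 1.0                       # a reddish "angry" template
blended = apply_emotion_template(skin, angry, alpha=0.3)
print(blended.shape, blended.min() >= 0.0, blended.max() <= 1.0)
```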