Phased Visualization of Facial Expressions Space using FCM Clustering


  • Sung-Ho Kim (Division of Computer and Information Engineering, Sangji University)
  • Published: 2008.02.28


This paper presents a phased visualization method for a facial expression space that lets users control the facial expression of a 3D avatar by selecting a sequence of facial frames from the space. Our system builds a 2D facial expression space from approximately 2,400 facial expression frames, comprising a neutral expression and 11 motion sequences. Users control the avatar's expression in real time by navigating this space. Because the space supports phased control, from broad expressions down to fine details, it requires a correspondingly phased visualization. To achieve this, we apply fuzzy c-means (FCM) clustering: the system initially partitions the 2,400 expressions into 11 clusters, and doubles the number of clusters each time the level of detail increases. Since a cluster center generally does not coincide with any actual expression in the space, the expression closest to each center is chosen as that cluster's representative. We had users control the avatar's expression through the system at each phase and evaluated the system based on the results.
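The clustering step described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it uses a from-scratch fuzzy c-means (fuzzifier m = 2) on random 2D stand-in coordinates in place of the paper's projected expression frames, and the function names `fcm` and `representatives` are illustrative. It shows the two key ideas: the cluster count is 11 × 2^level, and each cluster center is snapped to the nearest actual frame so that every representative is a real expression.

```python
import numpy as np

def fcm(points, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers for the given points."""
    rng = np.random.default_rng(seed)
    n = len(points)
    # Random initial membership matrix; each row sums to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    centers = np.zeros((n_clusters, points.shape[1]))
    for _ in range(n_iter):
        um = u ** m
        # Centers are membership-weighted means of the points.
        centers = um.T @ points / um.sum(axis=0)[:, None]
        # Distances from every point to every center (epsilon avoids div by zero).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik ∝ 1 / d_ik^(2/(m-1)).
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers

def representatives(points, centers):
    """Snap each cluster center to the nearest actual expression frame."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return points[d.argmin(axis=0)]

# Hypothetical stand-in data: 2,400 random 2D coordinates in place of the
# paper's projected expression frames.
rng = np.random.default_rng(42)
space = rng.random((2400, 2))

level = 0
k = 11 * 2 ** level          # 11 clusters at the first level, doubled per level
centers = fcm(space, k)
reps = representatives(space, centers)
print(len(reps))             # 11 representative frames at the first level
```

Snapping centers to the nearest data point matters because an FCM center is a weighted mean and will almost never coincide with a captured frame; the representative must be a frame the avatar can actually display.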
