Non-rigid 3D Shape Recovery from Stereo 2D Video Sequence

  • Received : 2015.12.15
  • Accepted : 2016.01.20
  • Published : 2016.02.29

Abstract

Most naturally moving objects are non-rigid shapes whose deformation varies randomly over time, and their types are highly diverse. In recent years, non-rigid shape reconstruction methods have been widely applied in the movie and game industries. However, a common practical approach requires attaching many beacon markers to the moving object. To overcome this drawback, non-rigid shape reconstruction from input video without beacon sets has been investigated in multimedia application fields. In this regard, this paper proposes a novel CPSRF (Chained Partial Stereo Rigid Factorization) algorithm that reconstructs a non-rigid 3D shape. Our method focuses on the real-time, per-frame reconstruction of non-rigid 3D shape and motion from stereo 2D video sequences, and it does not constrain the deformation of the time-varying non-rigid shape to follow a Gaussian distribution. Experimental results show that the 3D reconstruction performance of the proposed CPSRF method is superior to that of a previous method that does not account for random shape deformation.

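The rigid factorization that methods in this family chain together goes back to the Tomasi–Kanade observation that, under orthographic projection, the centered 2F x P matrix of tracked image points has rank 3 and splits into motion and shape by a truncated SVD. The sketch below is a minimal illustration of that classical rank-3 factorization on synthetic data; it is not the authors' CPSRF implementation, and all variable names are ours.

```python
import numpy as np

# Synthetic rigid scene: P points observed in F frames under orthography.
rng = np.random.default_rng(0)
P, F = 30, 8
S_true = rng.standard_normal((3, P))          # ground-truth 3D shape (3 x P)

rows = []
for _ in range(F):
    # Random rotation; an orthographic camera keeps only its first two rows.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    t = rng.standard_normal((2, 1))           # per-frame 2D translation
    rows.append(Q[:2] @ S_true + t)
W = np.vstack(rows)                           # 2F x P measurement matrix

# Tomasi-Kanade factorization: subtract per-row centroids (removes the
# translations), then take the rank-3 truncated SVD.
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M_hat = U[:, :3] * s[:3]                      # motion, up to an affine ambiguity
S_hat = Vt[:3]                                # shape, up to the same ambiguity

resid = np.linalg.norm(W0 - M_hat @ S_hat)
print(f"rank-3 reprojection residual: {resid:.2e}")
```

The recovered motion and shape are defined only up to a 3 x 3 affine ambiguity; rigid methods resolve it with orthonormality constraints on the camera rows, while non-rigid extensions such as the one described above must additionally handle per-frame deformation.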