2D/3D Image Conversion Method Using Level Simplification and Noise Reduction of Optical Flow and Edge Information


  • Received : 2011.12.12
  • Accepted : 2012.02.10
  • Published : 2012.02.29

Abstract

In this paper, we propose an improved optical flow algorithm that reduces both computational complexity and noise. Optical flow is one of the more accurate methods for generating depth information from two image frames, because it tracks the motion of each pixel as a vector. Its pixel-level computation, however, makes it slow and prone to noise. The proposed method shortens the computation time by applying a level-simplification step, and suppresses noise by applying optical flow only to regions that have eigenvectors, i.e., to object regions; the depth of the background area is then generated from the edge image. The resulting depth map is converted into a three-dimensional image with DIBR (Depth Image Based Rendering), and the error rate of the final image is measured with the SSIM (Structural SIMilarity) index.
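The abstract only outlines the conversion pipeline, so the following is a minimal sketch of how such a depth-from-motion conversion could be assembled. It assumes OpenCV's Farnebäck dense optical flow in place of the authors' flow estimator, gray-level quantization as the level-simplification step, a flow-magnitude threshold as a rough stand-in for the eigenvector-based object regions, a blurred Canny edge map for the background depth, and a plain horizontal pixel shift as the DIBR step; none of these specific choices come from the paper itself.

```python
# Sketch of a 2D-to-3D conversion pipeline in the spirit of the abstract.
# Stand-ins (assumptions, not the authors' method):
#   level simplification -> gray-level quantization
#   optical flow         -> cv2.calcOpticalFlowFarneback
#   object regions       -> flow-magnitude threshold
#   background depth     -> blurred Canny edge map
#   DIBR                 -> horizontal pixel shift proportional to depth
import cv2
import numpy as np

def simplify_levels(gray, levels=16):
    """Quantize gray levels to cut the number of distinct intensities."""
    step = 256 // levels
    return (gray // step) * step

def depth_map(prev_bgr, curr_bgr, fg_thresh=1.0):
    prev = simplify_levels(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY))
    curr = simplify_levels(cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))

    # Dense optical flow between the two simplified frames.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Foreground: regions with significant motion. Larger motion is treated
    # as closer to the camera, i.e., larger depth value.
    fg = mag > fg_thresh
    depth = np.zeros_like(mag)
    depth[fg] = mag[fg]
    depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)

    # Background: assign depth from the edge image where no motion was found.
    edges = cv2.Canny(curr, 50, 150)
    bg_depth = cv2.GaussianBlur(edges.astype(np.float32), (21, 21), 0)
    bg_depth = cv2.normalize(bg_depth, None, 0, 128, cv2.NORM_MINMAX)
    depth[~fg] = bg_depth[~fg]
    return depth.astype(np.uint8)

def dibr_stereo(image_bgr, depth, max_disp=16):
    """Render a right-eye view by shifting pixels horizontally by depth."""
    h, w = depth.shape
    disp = (depth.astype(np.float32) / 255.0 * max_disp).astype(np.int32)
    right = np.zeros_like(image_bgr)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs - disp[y], 0, w - 1)
        right[y, tx] = image_bgr[y, xs]
    # Crudely fill disocclusion holes left by the shift.
    return cv2.medianBlur(right, 3)
```

A full DIBR renderer would pre-filter the depth map and interpolate the disoccluded regions rather than relying on the median filter used above.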

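The SSIM-based error measurement mentioned at the end of the abstract can be reproduced along the following lines; the use of scikit-image and the definition of the error rate as 1 − SSIM are assumptions, since the paper does not specify either.

```python
# Brief sketch of the SSIM error check described in the abstract.
import cv2
from skimage.metrics import structural_similarity

def ssim_error_rate(reference_bgr, rendered_bgr):
    """Return the SSIM score and an error rate defined here as 1 - SSIM."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    ren = cv2.cvtColor(rendered_bgr, cv2.COLOR_BGR2GRAY)
    score = structural_similarity(ref, ren)
    return score, 1.0 - score
```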


