
Rendering Quality Improvement Method based on Depth and Inverse Warping

  • Lee, Heejea (Department of Computer Science, Hanyang University) ;
  • Yun, Junyoung (Department of Computer Science, Hanyang University) ;
  • Park, Jong-Il (Department of Computer Science, Hanyang University)
  • Received : 2021.09.06
  • Accepted : 2021.11.05
  • Published : 2021.11.30

Abstract

Point cloud content is immersive content that records a real environment or object as points carrying three-dimensional position information together with their corresponding colors. When content composed of such 3D points, each holding position and color information, is enlarged and rendered, the gaps between the points widen and empty holes appear. In this paper, we propose a method that improves the quality of point cloud content by locating the holes created when the point cloud is enlarged and filling them through inverse-warping-based interpolation that uses depth information. When the point spacing widens due to image enlargement or camera proximity, points belonging to the back side of the object become visible through the gaps, which hinders the interpolation. To resolve this, we first remove the points corresponding to the back side of the point cloud. Next, we extract the depth map of the viewpoint at which the empty holes occur. Finally, we perform inverse warping to fetch the corresponding pixels from the original data. Rendering content with the proposed method improved quality by 1.2 dB in average PSNR compared to the conventional approach of enlarging the point size to fill the empty regions.
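The sketch below illustrates the kind of depth-based inverse warping described above. It is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a pinhole camera model, that hole pixels have already been detected and assigned depth values interpolated from neighboring valid pixels, and that the relative pose between the enlarged (target) view and the original (source) view is known. All names such as fill_holes_inverse_warp, K_tgt, and T_tgt2src are hypothetical.

```python
import numpy as np


def fill_holes_inverse_warp(tgt_rgb, tgt_depth, hole_mask,
                            K_tgt, T_tgt2src, K_src, src_rgb):
    """Fill hole pixels of the target rendering by inverse warping.

    tgt_rgb   : (H, W, 3)   rendering at the enlarged (target) viewpoint
    tgt_depth : (H, W)      target-view depth map; hole pixels are assumed to
                            already carry depth interpolated from neighbors
    hole_mask : (H, W)      boolean mask marking the empty-hole pixels
    K_tgt     : (3, 3)      target-view camera intrinsics
    T_tgt2src : (4, 4)      rigid transform from target camera to source camera
    K_src     : (3, 3)      source-view (original rendering) intrinsics
    src_rgb   : (Hs, Ws, 3) original rendering used as the color source
    """
    out = tgt_rgb.copy()

    # Pixel coordinates and depths of the hole pixels only.
    ys, xs = np.nonzero(hole_mask)
    z = tgt_depth[ys, xs]

    # Back-project hole pixels into 3D points in the target camera frame.
    pix = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)  # (3, N)
    pts_tgt = (np.linalg.inv(K_tgt) @ pix) * z                     # (3, N)

    # Apply the inverse (target-to-source) transform to reach the source frame.
    pts_h = np.vstack([pts_tgt, np.ones((1, pts_tgt.shape[1]))])   # (4, N)
    pts_src = (T_tgt2src @ pts_h)[:3]

    # Project into the source image and sample colors (nearest neighbor).
    proj = K_src @ pts_src
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    hs, ws = src_rgb.shape[:2]
    valid = (proj[2] > 0) & (u >= 0) & (u < ws) & (v >= 0) & (v < hs)
    out[ys[valid], xs[valid]] = src_rgb[v[valid], u[valid]]
    return out
```

Nearest-neighbor sampling keeps the sketch short; bilinearly sampling the source image at the projected sub-pixel positions would be a natural refinement.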

Keywords

Acknowledgement

This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (Ministry of Science and ICT) in 2021 (No. 2020-0-00452, Development of Adaptive Viewer-centric Point Cloud AR/VR Streaming Platform Technology).
