• Title/Summary/Keyword: 3d depth image


The Enhancement of the Boundary-Based Depth Image (경계 기반의 깊이 영상 개선)

  • Ahn, Yang-Keun;Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information, v.17 no.4, pp.51-58, 2012
  • Recently, 3D technology based on depth images has been widely used in various fields, including 3D space recognition, image acquisition, interaction, and games. A depth camera is used to produce the depth image, and various efforts have been made to improve its quality. In this paper, we suggest using an area-based Canny edge detector to improve the depth image when applying depth-camera-based 3D technology. The suggested method provides an improved depth image through pre-processing and post-processing, correcting the quality deterioration that may occur when acquiring a depth image in a constrained environment. For objective image quality evaluation, we confirmed an improvement of up to 0.42 dB by applying the improved depth image to the virtual view reference software and comparing the results. In addition, using the DSCQS (Double Stimulus Continuous Quality Scale) method, the effectiveness of the improved depth image was confirmed through a formal evaluation of subjective quality.
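The boundary-refinement idea above can be sketched as follows. This is a minimal stand-in, not the paper's method: a simple gradient-magnitude mask replaces the area-based Canny detector, and noisy depth values on detected boundaries are repaired with a local 3x3 median (the threshold and window size are assumptions):

```python
import numpy as np

def sobel_edges(gray, thresh=0.3):
    # Gradient-magnitude edge mask (illustrative stand-in for the
    # paper's area-based Canny detector).
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def refine_depth(depth, edges):
    # Replace depth values on detected boundaries with the local 3x3
    # median, suppressing the boundary noise typical of depth cameras.
    out = depth.copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(edges)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        out[y, x] = np.median(depth[y0:y1, x0:x1])
    return out
```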

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents, v.10 no.3, pp.9-16, 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
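The cut-detection step described above (splitting the input sequence using a color histogram) can be sketched roughly as follows; the 16-bin histogram and the L1 distance threshold are assumptions, not values from the paper:

```python
import numpy as np

def histogram(frame, bins=16):
    # Normalized intensity histogram of one frame.
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def detect_cuts(frames, thresh=0.5):
    # Flag a cut when the L1 distance between consecutive frame
    # histograms exceeds `thresh`; returns the first frame of each cut.
    cuts = [0]
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if np.abs(cur - prev).sum() > thresh:
            cuts.append(i)
        prev = cur
    return cuts
```

Each detected cut would then receive its own hypothesized depth-gradient model as the initial depth.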

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2013.10a, pp.763-764, 2013
  • A three-dimensional image capturing device, together with its signal processing algorithm and apparatus, is presented. Three-dimensional information is an emerging differentiator that provides consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It adds the depth information of a scene to the conventional color image, so that the full information of real life as human eyes experience it can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested novel optical resonator enables the capture of a full-HD depth image with mm-scale depth accuracy, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously (Figures 2 and 3). The resulting high-definition color/depth image and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, 3D camera system prototype, and signal processing algorithms.
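The Time-of-Flight principle the system relies on can be illustrated with the standard continuous-wave phase-to-depth relation; this is the generic TOF formula, not the paper's implementation, with the 20 MHz modulation frequency taken from the abstract:

```python
import math

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # 20 MHz modulation frequency, as in the abstract

def tof_depth(phase_rad):
    # Continuous-wave TOF: the measured phase shift of the reflected,
    # demodulated IR signal maps linearly to distance,
    #   d = c * phi / (4 * pi * f_mod),
    # with an unambiguous range of c / (2 * f_mod) ~ 7.5 m at 20 MHz.
    return C * phase_rad / (4 * math.pi * F_MOD)
```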


Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.11, pp.2175-2190, 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that affords acceptable image quality for generating DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which the multiview images captured directly from the scene are compared to the multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed no significant effect of quantizing the depth map from 16-bit down to 7-bit (more specifically, 96 levels) on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
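The 16-bit-to-96-level quantization tested in the paper can be reproduced roughly as follows; reconstructing each value at its bin center (mid-rise) is an assumption:

```python
import numpy as np

def quantize_depth(depth16, levels):
    # Map a 16-bit depth map onto `levels` discrete values and back,
    # mimicking the 16-bit -> 7-bit (96-scale) experiment.
    step = 65536 / levels
    q = np.floor(depth16 / step)
    return (q * step + step / 2).astype(np.uint16)
```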

A Study on 2D/3D image Conversion Method using Create Depth Map (2D/3D 변환을 위한 깊이정보 생성기법에 관한 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society, v.12 no.4, pp.1897-1903, 2011
  • This paper discusses 2D-to-3D conversion of images using techniques such as object extraction and depth-map creation. The general procedure for converting 2D images into a 3D image is extracting objects from the 2D image, estimating the distance of each point, generating the 3D image, and correcting it to reduce noise. This paper proposes new, modified methods for creating a depth map from a 2D image and estimating the distances of the objects in it. The depth map, which determines the distance of objects, is the key data for creating a 3D image from 2D images. To obtain more accurate depth-map data, noise filtering is applied to the optical flow. With the proposed method, better depth-map information is calculated and a better 3D image is constructed.
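A rough sketch of filtering the optical flow before deriving depth: here the flow field is assumed given, a median filter stands in for the paper's noise filtering, and mapping larger apparent motion to nearer depth is an illustrative assumption:

```python
import numpy as np

def flow_to_depth(flow_u, flow_v, k=3):
    # Median-filter the flow magnitude to suppress outliers, then
    # normalize it into an 8-bit depth proxy (larger motion = nearer).
    mag = np.hypot(flow_u, flow_v)
    h, w = mag.shape
    pad = k // 2
    padded = np.pad(mag, pad, mode='edge')
    filt = np.empty_like(mag)
    for y in range(h):
        for x in range(w):
            filt[y, x] = np.median(padded[y:y + k, x:x + k])
    rng = filt.max() - filt.min()
    if rng == 0:
        return np.zeros_like(filt, dtype=np.uint8)
    return ((filt - filt.min()) / rng * 255).astype(np.uint8)
```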

Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering, v.20 no.3, pp.368-378, 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV), because it can easily provide 3D contents. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. However, except in some particular images, depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video contents. From this viewpoint, this paper proposes a novel method applicable to general types of image, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was performed on 24 video clips with a variety of contents. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D contents.
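The binomial filter mentioned above is a small smoothing kernel whose coefficients approach a Gaussian as the order grows; a minimal sketch (row-wise smoothing only, with the kernel size an assumption):

```python
import numpy as np

def binomial_kernel(n=5):
    # Repeated convolution of [1, 1] yields binomial coefficients,
    # e.g. n=5 -> [1, 4, 6, 4, 1] / 16, a cheap Gaussian approximation.
    k = np.array([1.0])
    for _ in range(n - 1):
        k = np.convolve(k, [1.0, 1.0])
    return k / k.sum()

def smooth_rows(depth, n=5):
    # Smooth each row of the depth map with the binomial kernel.
    k = binomial_kernel(n)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, depth)
```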

2D/3D conversion method using depth map based on haze and relative height cue (실안개와 상대적 높이 단서 기반의 깊이 지도를 이용한 2D/3D 변환 기법)

  • Han, Sung-Ho;Kim, Yo-Sup;Lee, Jong-Yong;Lee, Sang-Hun
    • Journal of Digital Convergence, v.10 no.9, pp.351-356, 2012
  • This paper presents a 2D/3D conversion technique using a depth map generated from haze and the relative-height cue. When only conventional haze information is used, errors can occur in images without haze. To reduce such errors, a new approach is proposed that combines the haze information with a depth map constructed from the relative-height cue. Also, the gray-scale image from mean-shift segmentation is combined with the haze-based depth map to sharpen the objects' contour lines, upgrading the quality of the 3D image. Left- and right-view images are generated by DIBR (Depth Image Based Rendering) using the input image and the final depth map. The left and right images are used to generate a red-cyan 3D image, and the result is verified by measuring the PSNR between the depth maps.
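The relative-height cue can be sketched as a simple vertical depth gradient blended with a haze-based map; the linear gradient and the equal blending weights are assumptions, not the paper's exact formulation:

```python
import numpy as np

def height_cue_depth(h, w):
    # Relative-height cue: pixels lower in the frame are assumed nearer.
    # Here larger values mean farther, so depth decreases toward the
    # bottom row (top = 255, bottom = 0).
    rows = np.linspace(255, 0, h)
    return np.tile(rows[:, None], (1, w)).astype(np.uint8)

def combine_with_haze(haze_depth, height_depth, alpha=0.5):
    # Weighted blend of the haze-based map and the height-cue map
    # (equal weights are an assumption).
    return (alpha * haze_depth + (1 - alpha) * height_depth).astype(np.uint8)
```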

Motion Depth Generation Using MHI for 3D Video Conversion (3D 동영상 변환을 위한 MHI 기반 모션 깊이맵 생성)

  • Kim, Won Hoi;Gil, Jong In;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering, v.22 no.4, pp.429-437, 2017
  • 2D-to-3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. Motion is also an important cue for depth estimation and is estimated by block-based motion estimation, optical flow, and so forth. This paper proposes a new method for motion depth generation using the Motion History Image (MHI) and evaluates the feasibility of utilizing the MHI. In the experiments, the proposed method was performed on eight video clips with a variety of motion classes. A qualitative test on the motion depth maps, together with a comparison of processing times, validated the feasibility of the proposed method.
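A Motion History Image keeps, per pixel, the timestamp of the most recent motion. Below is a minimal sketch of the standard MHI update plus a direct mapping of recent motion to near depth; the mapping is an assumption consistent with the idea of a motion depth map, not the paper's exact method:

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    # Standard MHI update: moving pixels get the current timestamp;
    # pixels whose history is older than `duration` are cleared.
    out = mhi.copy()
    out[motion_mask] = timestamp
    out[~motion_mask & (out < timestamp - duration)] = 0
    return out

def mhi_to_depth(mhi, timestamp, duration):
    # Scale motion history into an 8-bit depth map: the more recent
    # the motion, the nearer (larger) the assigned depth.
    d = np.clip((mhi - (timestamp - duration)) / duration, 0, 1)
    return (d * 255).astype(np.uint8)
```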

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering, v.24 no.2, pp.281-291, 2019
  • Depth from defocus estimates 3D depth by exploiting the phenomenon that an object in the camera's focal plane forms a sharp image while an object away from the focal plane produces a blurred one. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained by depth-from-defocus estimation using either one image from a single camera or two images from the same camera at different focus settings. For depth estimation using one image, the best performance was achieved with a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone camera images and to 200 mm and 300 mm for DSLR camera images.
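The blur cue that depth from defocus relies on can be measured, for example, with the variance of a discrete Laplacian: sharp in-focus patches score high, defocused patches low. This metric is a common stand-in and is not claimed to be the paper's estimator:

```python
import numpy as np

def blur_metric(gray):
    # Variance of the 4-neighbor discrete Laplacian. A uniform
    # (fully defocused) patch scores 0; sharp texture scores high.
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())
```

Comparing this score across pixels (or between the two differently focused images) gives the relative blur from which depth is inferred.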

Visual Fatigue Reduction Based on Depth Adjustment for DIBR System

  • Liu, Ran;Tan, Yingchun;Tian, Fengchun;Xie, Hui;Tai, Guoqin;Tan, Weimin;Liu, Junling;Xu, Xiaoyan;Kadri, Chaibou;Abakah, Naana
    • KSII Transactions on Internet and Information Systems (TIIS), v.6 no.4, pp.1171-1187, 2012
  • A depth adjustment method for visual fatigue reduction in a depth-image-based rendering (DIBR) system is proposed. One important aspect of the method is that no calibration parameters are needed for the adjustment. By analyzing 3D image warping, the perceived depth is expressed as a function of three adjustable parameters: the virtual view number, a scale factor, and the depth value of the zero-parallax-setting (ZPS) plane. Adjusting these three parameters according to the proposed parameter-modification algorithm when performing 3D image warping effectively changes the perceived depth of stereo pairs generated by the DIBR system. As the depth adjustment is performed with simple 3D image warping equations, the proposed method is well suited to hardware implementation. Experimental results show that the proposed depth adjustment improves the visual comfort of stereo pairs and generates comfortable stereoscopic images with the different perceived depths that viewers desire.
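The role of the three parameters can be illustrated with a deliberately simplified disparity formula: depth values in front of the ZPS plane shift one way, values behind it the other, scaled by the view offset and scale factor. The paper's actual warping equations differ; this is only a sign-and-scale illustration:

```python
def parallax_shift(depth, zps, scale, view_offset):
    # Simplified horizontal disparity for 3D image warping:
    # `zps` is the depth of the zero-parallax plane (0..255),
    # `scale` the global depth scale factor, `view_offset` the
    # signed virtual-view index. All three are the adjustable
    # knobs described in the abstract.
    return view_offset * scale * (depth - zps) / 255.0
```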