• Title/Summary/Keyword: reconstructed 3-D image


Extraction of location of 3-D object from CIIR method based on blur effect of reconstructed POI

  • Park, Seok-Chan;Kim, Seung-Cheol;Kim, Eun-Soo
• Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1363-1366
    • /
    • 2009
  • A new recognition method is proposed to find a three-dimensional target object in integral imaging. Finding the location of the target image requires a number of reconstructed reference images. The method provides accurate location information for the target image by correlating the reconstructed target images with the reference images.


The Utility Evaluation of Reconstructed 3-D Images by Maximum Intensity Projection in Magnetic Resonance Mammography and Cholangiopancreatography

  • Cho, Jae-Hwan;Lee, Hae-Kag;Park, Cheol-Soo;Kim, Ham-Gyum;Baek, Jong-Geun;Kim, Eng-Chan
    • Journal of Magnetics
    • /
    • v.19 no.4
    • /
    • pp.365-371
    • /
    • 2014
  • The aim of this study was to evaluate the utility of 3-D images by comparing and analyzing 3-D images reconstructed with maximum intensity projection (MIP) from fast spin echo magnetic resonance cholangiopancreatography (MRCP) images against the subtraction images derived from dynamic tests of magnetic resonance mammography. The study targeted 20 patients histologically diagnosed with pancreaticobiliary duct disease and 20 patients showing pancreaticobiliary duct diseases, for whom dynamic breast MR (magnetic resonance) images, fast spin echo images of the pancreaticobiliary duct, and 3-D reconstructed images were acquired using 1.5T and 3.0T MR scanners. As a result of the study, the signal-to-noise ratio in the subtracted breast image before and after administering the contrast agent and in the reconstructed 3-D breast image was higher in the reconstructed image for lesional tissue, relevant tissue, and fat tissue. However, no statistically meaningful differences were found in the contrast-to-noise ratio of the two images. In the case of the MRCP images, no differences were found between the ratios of the fast spin echo image and the reconstructed 3-D image.
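The SNR and CNR comparisons described above follow standard region-of-interest definitions; a minimal sketch, with illustrative pixel values that are not taken from the study:

```python
import numpy as np

def snr(roi_signal, roi_noise):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return np.mean(roi_signal) / np.std(roi_noise)

def cnr(roi_a, roi_b, roi_noise):
    """Contrast-to-noise ratio between two tissue ROIs."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(roi_noise)

# Illustrative ROI pixel values (not from the study)
lesion = np.array([180.0, 175.0, 185.0, 178.0])
fat = np.array([60.0, 58.0, 62.0, 61.0])
background = np.array([2.0, -1.0, 1.0, -2.0])

print(snr(lesion, background))
print(cnr(lesion, fat, background))
```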

Resolution-enhanced Reconstruction of 3D Object Using Depth-reversed Elemental Images for Partially Occluded Object Recognition

  • Wei, Tan-Chun;Shin, Dong-Hak;Lee, Byung-Gook
    • Journal of the Optical Society of Korea
    • /
    • v.13 no.1
    • /
    • pp.139-145
    • /
    • 2009
  • Computational integral imaging (CII) is a new method for 3D imaging and visualization. However, it suffers from seriously degraded quality of the reconstructed image as the distance to the reconstructed image plane increases. In this paper, to overcome this problem, we propose a CII method based on a smart pixel mapping (SPM) technique for partially occluded 3D object recognition, in which the object to be recognized is located far from the lenslet array. In SPM-based CII, SPM moves a distant 3D object toward the lenslet array and thereby improves the quality of the reconstructed image. To show the usefulness of the proposed method, we carry out experiments on occluded objects and present the experimental results.
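The computational reconstruction underlying CII is, at its core, a shift-and-sum back-projection of the elemental images; a minimal 1-D sketch (this is not the authors' SPM method, and the disparity values are illustrative):

```python
import numpy as np

def ciir_shift_and_sum(elemental, depth_shift):
    """Average the elemental images after shifting each one in
    proportion to its lenslet index; objects at the matching depth
    add up coherently, objects at other depths blur out."""
    K = len(elemental)
    recon = np.zeros_like(elemental[0], dtype=float)
    for k in range(K):
        shift = int(round((k - K // 2) * depth_shift))
        recon += np.roll(elemental[k], shift)
    return recon / K

# Toy example: a point source seen through 3 lenslets with disparity 2
elemental = np.zeros((3, 9))
for k in range(3):
    elemental[k, 4 + 2 * (k - 1)] = 1.0

recon = ciir_shift_and_sum(elemental, depth_shift=-2)
# recon[4] == 1.0: the point is in focus at the matching depth plane
```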

Difference in glenoid retroversion between two-dimensional axial computed tomography and three-dimensional reconstructed images

  • Kim, Hyungsuk;Yoo, Chang Hyun;Park, Soo Bin;Song, Hyun Seok
    • Clinics in Shoulder and Elbow
    • /
    • v.23 no.2
    • /
    • pp.71-79
    • /
    • 2020
  • Background: The glenoid version of the shoulder joint correlates with the stability of the glenohumeral joint and the clinical results of total shoulder arthroplasty. We sought to analyze and compare the glenoid version measured by traditional axial two-dimensional (2D) computed tomography (CT) and three-dimensional (3D) reconstructed images at different levels. Methods: A total of 30 cases, including 15 male and 15 female patients who underwent 3D shoulder CT imaging, were randomly selected consecutively at one hospital and matched by sex. The angular difference between the scapular body axis and the 2D CT slice axis was measured. The glenoid version was assessed at three levels (midpoint, upper one-third, and center of the lower circle of the glenoid) using Friedman's method in the axial plane on the 2D CT images, and at the same levels on three different transverse planes using the 3D reconstructed image. Results: The mean difference between the scapular body axis on the 3D reconstructed image and the 2D CT slice axis was 38.4°. At the level of the midpoint of the glenoid, the measurements were 1.7°±4.9° on the 2D CT images and -1.8°±4.1° on the 3D reconstructed image. At the level of the center of the lower circle, the measurements were 2.7°±5.2° on the 2D CT images and -0.5°±4.8° on the 3D reconstructed image. A statistically significant difference was found between the 2D CT and 3D reconstructed images at all three levels. Conclusions: The glenoid version is measured differently on axial 2D CT and 3D reconstructed images at all three levels. Use of 3D reconstructed imaging can provide a more accurate glenoid version profile than 2D CT. The glenoid version also differs when measured at different levels.
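Friedman's method measures the version as the angle between the glenoid face line and a reference perpendicular to the scapular axis; a hedged 2D sketch with made-up landmark coordinates (the landmarks and sign handling here are assumptions for illustration, not the paper's exact protocol):

```python
import numpy as np

def version_angle(glenoid_ant, glenoid_post, scapular_tip, glenoid_center):
    """Friedman-style glenoid version: the angle (degrees) by which the
    glenoid face line deviates from perpendicular to the scapular axis.
    All landmarks are illustrative 2D points on one axial slice."""
    face = np.subtract(glenoid_post, glenoid_ant)
    axis = np.subtract(glenoid_center, scapular_tip)
    cosang = np.dot(face, axis) / (np.linalg.norm(face) * np.linalg.norm(axis))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) - 90.0

# A face exactly perpendicular to the scapular axis has 0 deg version
ver = version_angle((0.0, 0.0), (0.0, 1.0), (-10.0, 0.0), (0.0, 0.0))
```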

Comparison of personal computer with CT workstation in the evaluation of 3-dimensional CT image of the skull (전산화단층촬영 단말장치와 개인용 컴퓨터에서 재구성한 두부 3차원 전산화단층영상의 비교)

  • Kang Bok-Hee;Kim Kee-Deog;Park Chang-Seo
    • Imaging Science in Dentistry
    • /
    • v.31 no.1
    • /
    • pp.1-7
    • /
    • 2001
  • Purpose : To evaluate the usefulness of 3-dimensional images reconstructed on a personal computer in comparison with those of the CT workstation by quantitative comparison and analysis. Materials and Methods : Spiral CT data obtained from 27 persons were transferred from the CT workstation to a personal computer and reconstructed as 3-dimensional images on the personal computer using V-works 2.0™. One observer obtained 14 measurements on the reconstructed 3-dimensional images on both the CT workstation and the personal computer. A paired t-test was used to evaluate the intraobserver difference and the mean value of each measurement on the CT workstation and the personal computer. Pearson correlation analysis and % incongruence were also calculated. Results : I-Gn, N-Gn, N-A, N-Ns, B-A, and G-Op did not show any statistically significant difference (p>0.05); B-O, B-N, Eu-Eu, Zy-Zy, Biw, D-D, and Orbrd R and L showed statistically significant differences (p<0.05), but the mean differences of all measurements were below 2 mm, except for D-D. The correlation coefficient r was greater than 0.95 at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and it was 0.75 at B-O, 0.78 at D-D, and 0.82 at both Orbrd R and L. The % incongruence was below 4% at I-Gn, N-Gn, N-A, N-Ns, B-A, B-N, G-Op, Eu-Eu, Zy-Zy, and Biw, and 7.18%, 10.78%, 4.97%, and 5.89% at B-O, D-D, and Orbrd R and L, respectively. Conclusion : The personal computer can be considered highly useful for reconstruction of 3-dimensional images in terms of economy, accessibility, and convenience, except for thin bones and landmarks that are difficult to locate.
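The agreement statistics above can be sketched as follows; the % incongruence formula shown (mean absolute difference as a percentage of the paired mean) is an assumed definition, and the paired measurement values are illustrative, not from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    return np.corrcoef(x, y)[0, 1]

def pct_incongruence(x, y):
    """Mean absolute difference as a percentage of the paired mean
    (one common definition; the paper's exact formula is not given here)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(np.abs(x - y) / ((x + y) / 2)) * 100

# Illustrative paired measurements (mm): workstation vs. personal computer
ws = np.array([118.2, 95.4, 140.1, 132.8, 101.5])
pc = np.array([118.9, 94.8, 139.5, 133.6, 100.9])
print(pearson_r(ws, pc))
print(pct_incongruence(ws, pc))
```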


Statistical Analysis of 3D Volume of Red Blood Cells with Different Shapes via Digital Holographic Microscopy

  • Yi, Faliu;Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.16 no.2
    • /
    • pp.115-120
    • /
    • 2012
  • In this paper, we present a method to automatically quantify the three-dimensional (3D) volume of red blood cells (RBCs) using off-axis digital holographic microscopy. The RBC digital holograms are recorded via a CCD camera using an off-axis interferometry setup. The RBC phase image is reconstructed from the recorded off-axis digital hologram by a computational reconstruction algorithm. The watershed segmentation algorithm is applied to the reconstructed phase image to remove background parts and obtain clear targets in a phase image containing many single RBCs. After segmenting the reconstructed RBC phase image, all single RBCs are extracted, and the 3D volume of each single RBC is then measured from the surface area and the phase values of the corresponding RBC. In order to demonstrate the feasibility of the proposed method for automatically calculating the 3D volume of RBCs, two typical shapes of RBCs, i.e., stomatocyte and discocyte, are tested via experiments. Statistical distributions of 3D volume for each class of RBC are generated using our algorithm. Statistical hypothesis testing is conducted to investigate the difference between the statistical distributions for the two typical shapes of RBCs. Our experimental results illustrate that our study opens the possibility of automated quantitative analysis of the 3D volume of various types of RBCs.
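The volume computation described, integrating phase-derived cell height over the segmented region, can be sketched as follows; the wavelength, refractive-index difference, and pixel size are illustrative values, not the paper's:

```python
import numpy as np

def rbc_volume(phase, wavelength, dn, pixel_area):
    """Integrate optical phase over a segmented cell to get its volume.
    height = phase * wavelength / (2*pi*dn), where dn is the refractive
    index difference between the RBC and the surrounding medium;
    volume = sum(height) * pixel_area."""
    height = phase * wavelength / (2 * np.pi * dn)
    return np.sum(height) * pixel_area

# Toy phase map: uniform 2-rad phase over a 10x10-pixel "cell"
# (wavelength in um, pixel_area in um^2; values are illustrative)
phase = np.zeros((32, 32))
phase[10:20, 10:20] = 2.0
vol = rbc_volume(phase, wavelength=0.682, dn=0.06, pixel_area=0.01)
```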

High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.3
    • /
    • pp.61-71
    • /
    • 2018
  • When triangular meshes are generated from point clouds in global space, reconstructed through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, 3D reconstructed models beyond a certain size begin to suffer from unsightly artefacts, due to the insufficient precision of RGB-D sensors, as well as from significant burdens in memory requirements and rendering cost. In this paper, for the generation of 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderate-sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that, via a simple method based on the mapping between the 3D global space resulting from the camera pose estimation and the 2D texture space, textures can be generated effectively for 3D models reconstructed from captured RGB-D image streams.
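The mapping from reconstructed 3D global space back into a captured RGB frame is essentially a pinhole projection through the estimated camera pose; a minimal sketch (the intrinsics shown are typical RGB-D sensor values, assumed here for illustration):

```python
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    """Map a reconstructed 3D point into a captured RGB frame using the
    estimated camera pose (rotation R, translation t) and pinhole
    intrinsics; the returned pixel is where that point's texture color
    would be sampled from."""
    p_cam = R @ p_world + t
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# A point 2 m in front of an identity-pose camera projects to the
# principal point (319.5, 239.5)
u, v = project_point(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3),
                     fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```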

3D Image Conversion of 2D Still Image based-on Differential Area-Moving Scheme (차등적 영역 이동기법을 이용한 2차원 정지영상의 3차원 입체영상 변환)

  • Lee, Jong-Ho;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.11A
    • /
    • pp.1938-1945
    • /
    • 2001
  • In this paper, a new scheme for converting 2D input images into stereoscopic 3D images using a differential shifting method is proposed. First, relative depth information is estimated from disparity and occlusion information in the input stereo images; then, each image object is segmented by gray level using the estimated information. Finally, by differentially shifting the segmented objects according to the horizontal parallax, a stereoscopic 3D image with optimal stereopsis is reconstructed. Experimental results show that the stereo image reconstructed with the proposed scheme improves by about 1.6 dB in PSNR compared with the given input image. In experiments using a commercial stereo viewer, the reconstructed stereoscopic 3D images, in which each segmented object is horizontally shifted in the range of 4-5 pixels, are also found to have the most improved stereopsis.
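The 1.6 dB figure refers to PSNR between the reconstructed and input images; a minimal sketch of the standard PSNR computation:

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 5 gray levels gives MSE = 25, i.e. about 34.15 dB
value = psnr(np.full((4, 4), 5.0), np.zeros((4, 4)))
```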


3D Reconstructed Image of Neck Mass to Improve Patient's Understanding (경부 종물 환자의 이해도 개선을 위한 3차원 재건 영상의 활용)

  • Yoo, Young-Sam
    • Korean Journal of Head & Neck Oncology
    • /
    • v.26 no.2
    • /
    • pp.193-197
    • /
    • 2010
  • Objectives : Patients with neck tumors and their families need full information about the disease. In particular, the size and location of a tumor are confusing when conveyed verbally. With the aid of CT the problem is partly solved, but interpreting CT requires some medical education. We sought to determine the usefulness of 3D reconstructed images in educating patients about the disease. Material and Methods : Neck CT data were collected from 10 patients with various neck tumors and converted to 3D reconstructed images. Patients' understanding of the size and location of the tumors was rated from questionnaires using axial CT images and 3D images. Results : Understanding scores for the 3D images were greater than those for the CT images (p<0.006). Conclusion : 3D images reconstructed from CT can give patients more realistic visual information about their disease.

Comparisons of Object Recognition Performance with 3D Photon Counting & Gray Scale Images

  • Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.388-394
    • /
    • 2010
  • In this paper, the object recognition performance of a photon counting integral imaging system is quantitatively compared with that of a conventional gray scale imaging system. For 3D imaging of objects with a small number of photons, the elemental image set of a 3D scene is obtained using an integral imaging setup. We assume that the elemental image detection follows a Poisson distribution. A computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator are applied to the photon counting elemental image set in order to reconstruct the original 3D scene. To evaluate photon counting object recognition performance, the normalized correlation peaks between the reconstructed 3D scenes are calculated for both varied and fixed total numbers of photons in the reconstructed sectional image while changing the total number of image channels in the integral imaging system. It is quantitatively shown that the recognition performance of the photon counting integral imaging (PCII) system can approach that of a conventional gray scale imaging system as the number of image viewing channels is increased up to a threshold point. We also present experiments to find the threshold number of image channels in the PCII system that can guarantee recognition performance comparable to a gray scale imaging system. To the best of our knowledge, this is the first report comparing object recognition performance with 3D photon counting and gray scale images.
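Photon-limited detection under the Poisson assumption, followed by a normalized correlation score against the original scene, can be sketched as follows (the photon budget and random scene are illustrative, not the paper's experimental data):

```python
import numpy as np

def photon_count_image(irradiance, n_photons, rng):
    """Simulate photon-limited detection: each pixel count is Poisson
    distributed with mean proportional to the normalized irradiance
    times the total photon budget."""
    p = irradiance / irradiance.sum()
    return rng.poisson(n_photons * p)

def normalized_correlation(a, b):
    """Normalized (zero-mean) correlation between two images."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))          # stand-in gray scale scene
photon_img = photon_count_image(scene, n_photons=50000, rng=rng)
r = normalized_correlation(photon_img, scene)
```

Even with only a dozen or so photons per pixel on average, the correlation with the gray scale scene remains high, which is the effect the recognition comparison above relies on.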