• Title/Summary/Keyword: World-View 2 images


Estimation of the Available Green Roof Area using Geo-Spatial Data (공간정보를 이용한 옥상녹화 가용면적 추정)

  • Ahn, Ji-Yeon;Jung, Tae-Woong;Koo, Jee-hee
    • Journal of the Korean Society of Environmental Restoration Technology / v.19 no.5 / pp.11-17 / 2016
  • The purposes of this research are to estimate the available green roof area and to monitor the maintenance of green roofs using World-View 2 images. The research covers the development of World-View 2 application techniques both for estimating available green roof area and for monitoring green roof maintenance. The available green roof areas in Gwangjin-gu, Seoul, the case study area, were estimated using digital maps and World-View 2 images. The available green roof area is approximately 12.17% (2,153,700 m²) of the total area, and existing roof vegetation accounts for 0.46% (80,660 m²) of the total area. The extracted roof vegetation was verified using the Vworld 3D Desktop map service. The results may serve as a decision-making tool for central and local governments when determining the feasibility of green roof projects. In addition, project implementers may periodically monitor whether roof greening has been maintained, and the large archive of World-View 2 images acquired before and after such projects may be used regularly, contributing to the sharing of satellite image information.

Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.2 / pp.91-100 / 2011
  • Free viewpoint TV can provide images from any viewing angle according to viewer needs. In the real world, however, not every viewpoint can be captured: each camera captures only one viewpoint, and the set of captured images is called a multi-view image. Free viewpoint TV therefore needs to synthesize virtual intermediate viewpoints from the captured ones, and interpolation methods are the general solution to this problem. Producing a correctly interpolated viewpoint requires the depth images of the multi-view image. Unfortunately, multi-view video that includes depth images involves a huge amount of data, so a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that synthesizes the multi-view color and depth images. This paper proposes an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding. Experimental results confirm high compression performance and good reconstructed image quality.

WorldView-2 pan-sharpening by minimization of spectral distortion with least squares

  • Choi, Myung-Jin
    • Korean Journal of Remote Sensing / v.27 no.3 / pp.353-357 / 2011
  • Although the intensity-hue-saturation (IHS) method for pan-sharpening suffers from spectral distortion, it remains popular in the remote sensing community and is used as a standard procedure in many commercial packages because of its fast computation and easy implementation. Recently, IHS-like approaches have tried to overcome the spectral distortion inherited from the IHS method itself and have yielded good results. In this paper, a similar IHS-like method with least squares for WorldView-2 pan-sharpening is presented. In particular, unlike previous methods that use three- or four-band multispectral images, the six bands of the WorldView-2 multispectral image that lie within the range of the panchromatic spectral response are used in order to reduce spectral distortion during the merging process. The new approach provides satisfactory results, both visually and quantitatively, and demonstrates strong spectral fidelity for WorldView-2 eight-band multispectral imagery.
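The least-squares step described in this abstract can be sketched as follows. This is an illustrative outline, not the paper's implementation: it assumes the multispectral bands have already been upsampled to the panchromatic grid, and the function name and two-step structure are my own.

```python
import numpy as np

def ihs_like_pansharpen(pan, ms):
    """IHS-like pan-sharpening with least-squares band weights (sketch).

    pan : (H, W) panchromatic band
    ms  : (B, H, W) multispectral bands, already upsampled to the pan grid
    """
    B, H, W = ms.shape
    # Fit weights w so that sum_i w_i * MS_i best matches the pan band,
    # which is the least-squares intensity estimate.
    A = ms.reshape(B, -1).T                       # (H*W, B) design matrix
    w, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    intensity = (A @ w).reshape(H, W)
    # Inject the spatial detail (pan minus synthetic intensity) into every band.
    detail = pan - intensity
    return ms + detail[None, :, :]
```

When the pan band is exactly a linear combination of the multispectral bands, the injected detail is zero and the bands pass through unchanged, which is the sense in which the fitted intensity minimizes spectral distortion.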

Accuracy Comparison of TOA and TOC Reflectance Products of KOMPSAT-3, WorldView-2 and Pléiades-1A Image Sets Using RadCalNet BTCN and BSCN Data

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.21-32 / 2022
  • The classical question of how well the Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance of high-resolution satellite images match actual atmospheric and surface reflectance has gained renewed emphasis. Based on Radiometric Calibration Network (RadCalNet) BTCN and BSCN data, this study compared the accuracy of TOA and TOC reflectance products of currently available optical satellites, including KOMPSAT-3, WorldView-2, and Pléiades-1A image sets, calculated with the absolute atmospheric correction function of the Orfeo Toolbox (OTB). The comparison experiment used data from 2018 and 2019, and Landsat-8 image sets from the same period were included. The results showed that the TOA and TOC reflectance products obtained from the three image sets were highly consistent with the RadCalNet data, implying that any of these image types may be applied when high-resolution reflectance products are required for a given application. Meanwhile, the OTB results and those of another tool's Apparent Reflection method for WorldView-2 images were nearly identical. However, in some cases the Landsat-8 reflectance products provided by USGS showed lower consistency with the RadCalNet BTCN and BSCN reference data than those computed with the OTB tool. Continuous experiments on actively vegetated areas beyond the RadCalNet sites are necessary to obtain generalized results.
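For reference, the standard conversion from at-sensor radiance to TOA reflectance (the quantity compared against RadCalNet data above) can be sketched as below. This is the textbook formula, not the OTB implementation; the function name and parameter layout are illustrative.

```python
import numpy as np

def toa_reflectance(radiance, esun, sun_elev_deg, earth_sun_dist=1.0):
    """Standard TOA reflectance from at-sensor spectral radiance (sketch).

    radiance       : at-sensor radiance, W/(m^2 sr um)
    esun           : band mean solar exoatmospheric irradiance, W/(m^2 um)
    sun_elev_deg   : sun elevation angle in degrees
    earth_sun_dist : Earth-Sun distance in astronomical units
    """
    theta_s = np.radians(90.0 - sun_elev_deg)   # solar zenith angle
    return (np.pi * radiance * earth_sun_dist ** 2) / (esun * np.cos(theta_s))
```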

Accuracy Investigation of RPC-based Block Adjustment Using High Resolution Satellite Images GeoEye-1 and WorldView-2 (고해상도 위성영상 GeoEye-1과 WorldView-2의 RPC 블록조정모델 정확도 분석)

  • Choi, Sun-Yong;Kang, Jun-Mook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.107-116 / 2012
  • We investigated the three-dimensional geo-positioning accuracy derived from four high-resolution satellite images acquired by two different sensors, using vendor-provided rational polynomial coefficient (RPC) based block adjustment. We used two in-track stereo pairs from the GeoEye-1 and WorldView-2 satellites together with DGPS surveying data. In the experiment, we separately analyzed the accuracies of the RPC block adjustment models for two homogeneous stereo pairs, four heterogeneous stereo pairs, three triplet image sets, and one quadruplet image set. The results show that the accuracies of the models are nearly the same: without any GCPs the accuracy reaches about CEP(90) 2.3 m and LEP(90) 2.5 m, and with a single GCP it is about CEP(90) 0.3 m and LEP(90) 0.5 m.
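The CEP(90) and LEP(90) figures quoted above are percentile statistics of the checkpoint residuals: the 90th percentile of horizontal radial error and of absolute vertical error, respectively. A minimal sketch of how such figures are computed (the function name and input layout are my own):

```python
import numpy as np

def cep_lep(errors_enh, p=90):
    """Compute CEP(p) and LEP(p) from per-checkpoint geolocation errors.

    errors_enh : (N, 3) array of (dE, dN, dH) differences vs. surveyed truth
    Returns (CEP, LEP): the p-th percentile horizontal radial error and
    the p-th percentile absolute vertical error.
    """
    e = np.asarray(errors_enh, dtype=float)
    horiz = np.hypot(e[:, 0], e[:, 1])   # radial horizontal error per point
    vert = np.abs(e[:, 2])               # absolute vertical error per point
    return np.percentile(horiz, p), np.percentile(vert, p)
```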

COSMO-SkyMed 2 Image Color Mapping Using Random Forest Regression

  • Seo, Dae Kyo;Kim, Yong Hyun;Eo, Yang Dam;Park, Wan Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.4 / pp.319-326 / 2017
  • SAR (synthetic aperture radar) images are less affected by weather than optical images and can be acquired at any time of day, so they are actively utilized for military applications and natural disaster monitoring. However, because SAR data are in grayscale, visual analysis and interpretation of details are difficult. In this study, we propose a color mapping method using RF (random forest) regression to enhance the visual interpretability of SAR images. COSMO-SkyMed 2 and WorldView-3 images were obtained for the same area, and RF regression was used to establish the color configuration for color mapping. The results were compared with image fusion, a traditional color mapping method. The UIQI (universal image quality index), the SSIM (structural similarity) index, and CC (correlation coefficient) were used to evaluate image quality. The color-mapped image based on RF regression had significantly higher quality than the images derived from the other methods, confirming the applicability of RF regression-based color mapping for SAR images.
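The core of the approach, regressing color values from SAR intensity against a co-registered optical image, can be sketched with scikit-learn. This is a deliberately simplified illustration: it uses only the raw pixel intensity as the feature, whereas the paper's color configuration is richer, and the function names are my own.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_color_mapper(sar, optical_rgb, n_estimators=50, seed=0):
    """Fit an RF regressor mapping SAR intensity to RGB color (sketch).

    sar         : (H, W) grayscale SAR backscatter image
    optical_rgb : (H, W, 3) co-registered optical image used as color target
    """
    X = sar.reshape(-1, 1).astype(float)          # per-pixel feature vector
    y = optical_rgb.reshape(-1, 3).astype(float)  # per-pixel RGB target
    rf = RandomForestRegressor(n_estimators=n_estimators, random_state=seed)
    rf.fit(X, y)                                  # multi-output regression
    return rf

def colorize(rf, sar):
    """Apply the learned mapping to produce a color-mapped SAR image."""
    return rf.predict(sar.reshape(-1, 1).astype(float)).reshape(*sar.shape, 3)
```

In practice one would train on one scene pair and apply `colorize` to new SAR acquisitions of similar terrain, which is what makes the method useful when no optical image is available at interpretation time.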

Comparative Analysis of Image Fusion Methods According to Spectral Responses of High-Resolution Optical Sensors (고해상 광학센서의 스펙트럼 응답에 따른 영상융합 기법 비교분석)

  • Lee, Ha-Seong;Oh, Kwan-Young;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.227-239 / 2014
  • This study evaluates the performance of various image fusion methods based on the spectral responses of high-resolution optical satellite sensors such as KOMPSAT-2, QuickBird, and WorldView-2. The image fusion methods used in this study are GIHS, GIHSA, GS1, and AIHS. The quality of each fusion method was evaluated through both quantitative and visual analysis. The quantitative analysis used the spectral angle mapper (SAM), the relative global dimensional error (spectral ERGAS), and the image quality index (Q4). The results indicate that the GIHSA method is slightly better than the other methods for KOMPSAT-2 images, whereas the GS1 method is more suitable for QuickBird and WorldView-2 images.
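The SAM and ERGAS indices used for the quantitative analysis are standard and easy to sketch. The implementations below follow their common definitions; they are illustrative, not the authors' code, and the `ratio` default is an assumption about the pan/MS resolution ratio.

```python
import numpy as np

def sam(reference, fused, eps=1e-12):
    """Mean Spectral Angle Mapper in degrees; 0 means identical spectra.

    reference, fused : (B, H, W) multiband images
    """
    r = reference.reshape(reference.shape[0], -1)
    f = fused.reshape(fused.shape[0], -1)
    cos = (r * f).sum(0) / (np.linalg.norm(r, axis=0) *
                            np.linalg.norm(f, axis=0) + eps)
    # Angle between the spectral vectors at each pixel, averaged.
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def ergas(reference, fused, ratio=4):
    """Relative global dimensional synthesis error; lower is better."""
    B = reference.shape[0]
    rmse = np.sqrt(((reference - fused) ** 2).reshape(B, -1).mean(1))
    mean = reference.reshape(B, -1).mean(1)
    return float(100.0 / ratio * np.sqrt(((rmse / mean) ** 2).mean()))
```

Note that SAM is invariant to a per-pixel scaling of the spectrum (only the angle matters), which is why it isolates spectral distortion from intensity changes introduced by the fusion.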

A Design and Implementation of Direct Volume Rendering View Program based on Web (웹 기반의 다이렉트 볼륨 렌더링 View 프로그램의 설계 및 구현)

  • Yoon, Yo-Sup;Yoon, Ga-Rim;Kim, Young-Bong
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.402-407 / 2004
  • Since the World Wide Web, a simple and convenient tool, was introduced, the Internet has become the most accessible network resource for providing information worldwide. Furthermore, various methodologies have been developed to support dynamic services such as 3D view web services. We propose a volume rendering view program that interactively visualizes 3D data on the web. The 3D data are obtained by stacking 2D images along the z-direction. We employ a COM-based OCX control, a kind of ActiveX component. Through its 3D visualization and image analysis functions, this web program can contribute to the diagnosis of diseases at remote sites.


Analysis of Change Detection Results by UNet++ Models According to the Characteristics of Loss Function (손실함수의 특성에 따른 UNet++ 모델에 의한 변화탐지 결과 분석)

  • Jeong, Mila;Choi, Hoseong;Choi, Jaewan
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.929-937 / 2020
  • In this manuscript, the UNet++ model, one of the representative deep learning techniques for semantic segmentation, was used to detect changes between temporal satellite images. To analyze the learning results according to the loss function, we evaluated the change detection results of UNet++ models trained with binary cross-entropy and with the Jaccard coefficient. In addition, the learning results of the deep learning model were compared with existing pixel-based change detection algorithms using WorldView-3 images. The experiment confirmed that the performance of the deep learning model depends on the characteristics of the loss function, and that it outperformed the existing techniques.
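The two loss functions compared above can be sketched in a few lines. These are the common formulations (with a soft, differentiable Jaccard variant), not necessarily the exact ones used in the paper:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy over a change-probability map."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def soft_jaccard_loss(pred, target, eps=1e-7):
    """1 - soft Jaccard (IoU): directly optimizes region overlap, which is
    less sensitive to the change/no-change class imbalance than BCE."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return float(1.0 - (inter + eps) / (union + eps))
```

The practical difference is that BCE averages over all pixels, so the abundant no-change pixels dominate the gradient, while the Jaccard loss is normalized by the union of predicted and true change regions.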

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.309-314 / 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, the 3D transformations between the cameras must be found. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. We instead propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely through the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
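Once the sphere centers have been estimated in each camera's coordinate system, recovering the external parameters reduces to finding the rigid transform between corresponding center sets. A minimal sketch of that step, using the standard Kabsch/Procrustes solution (the function name is my own, and the paper's exact optimization is not reproduced):

```python
import numpy as np

def rigid_transform(centers_src, centers_dst):
    """Least-squares rigid transform (R, t) mapping sphere centers seen by
    one RGB-D camera onto the corresponding centers in the reference frame,
    i.e. minimizing sum_i ||R @ src_i + t - dst_i||^2.
    """
    src = np.asarray(centers_src, dtype=float)   # (N, 3)
    dst = np.asarray(centers_dst, dtype=float)   # (N, 3)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

At least three non-collinear sphere positions are needed for a unique solution; moving the sphere through many positions, as the paper describes, over-determines the system and averages out center-estimation noise.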