Simulation and Colorization between Gray-scale Images and Satellite SAR Images Using GAN

  • Cho, Su Min (Department of Technology Fusion Engineering, Graduate School, Konkuk University);
  • Heo, Jun Hyuk (Department of Civil and Environmental Engineering, Konkuk University);
  • Eo, Yang Dam (Department of Civil and Environmental Engineering, Konkuk University)
  • Received : 2023.12.07
  • Accepted : 2023.12.10
  • Published : 2024.02.01

Abstract

Optical satellite images are used for national security and information gathering, and their utilization continues to grow. However, weather conditions and acquisition-time constraints often yield low-quality images that do not meet user requirements. In this paper, a deep learning-based image translation and colorization model that references high-resolution SAR images was built to simulate the cloud-occluded areas of optical satellite images. The model was tested with different algorithms and input data types, and the resulting simulated images were compared and analyzed. In particular, the amount of pixel-value information in the input gray-scale image was matched to that of the SAR image, mitigating the problems caused by the relative lack of color information. In the experiments, the histogram distribution of the simulated images trained on gray-scale images and high-resolution SAR images was relatively similar to that of the original image. For quantitative analysis, the RMSE was about 6.9827 and the PSNR about 31.3960.
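
For reference, the RMSE and PSNR figures reported above can be reproduced with a short evaluation routine. The sketch below is not the authors' code; it assumes 8-bit images and the conventional definitions RMSE = sqrt(mean squared pixel error) and PSNR = 20·log10(255/RMSE), under which an RMSE near 7 corresponds to a PSNR of roughly 31 dB, in line with the reported values.

```python
import numpy as np

def rmse_psnr(original: np.ndarray, simulated: np.ndarray, max_val: float = 255.0):
    """Return (RMSE, PSNR in dB) between two images of identical shape.

    Assumes 8-bit pixel values (0-255); the error is averaged over every
    pixel and channel.
    """
    diff = original.astype(np.float64) - simulated.astype(np.float64)
    rmse = np.sqrt(np.mean(diff ** 2))
    psnr = 20.0 * np.log10(max_val / rmse) if rmse > 0 else float("inf")
    return rmse, psnr

# Example with random stand-in data; real use would load the original optical
# image and the GAN-simulated image (e.g. with rasterio or OpenCV).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
simulated = np.clip(original + rng.normal(0, 7, original.shape), 0, 255).astype(np.uint8)
print(rmse_psnr(original, simulated))
```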

Keywords

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C1085618).

References

  1. Darbaghshahi, F. N., Mohammadi, M. R. and Soryani, M. (2022). "Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation." IEEE Transactions on Geoscience and Remote Sensing, IEEE, Vol. 60, pp. 1-9, https://doi.org/10.1109/TGRS.2021.3131035.
  2. Isola, P., Zhu, J. Y., Zhou, T. and Efros, A. A. (2016). "Image-to-image translation with conditional adversarial networks." arXiv:1611.07004, https://doi.org/10.48550/arXiv.1611.07004. 
  3. Ji, G., Wang, Z., Zhou, L., Xia, Y., Zhong, S. and Gong, S. (2021). "SAR image colorization using multidomain cycle-consistency generative adversarial network." IEEE Geoscience and Remote Sensing Letters, IEEE, Vol. 18, No. 2, pp. 296-300, https://doi.org/10.1109/LGRS.2020.2969891. 
  4. Ku, W. and Chung, D. (2018). "The method for colorizing SAR images of Kompsat-5 using cycle GAN with multi-scale discriminators." Korean Journal of Remote Sensing, KSRS, Vol. 34, No. 6, pp. 1415-1425, https://doi.org/10.7780/kjrs.2018.34.6.3.8 (in Korean). 
  5. Lee, S. Y. and Chung, D. W. (2022). "Labeling dataset based colorization of SAR images using cycle GAN." The Journal of Korean Institute of Electromagnetic Engineering and Science, KIEES, Vol. 33, No. 10, pp. 776-783, https://doi.org/10.5515/KJKIEES.2022.33.10.776 (in Korean). 
  6. Lee, J. H., Kim, K. and Kim, J. H. (2021). "Design of CycleGAN model for SAR image colorization." Proceedings of 2021 IEEE VTS 17th Asia Pacific Wireless Communications Symposium (APWCS), IEEE, Osaka, Japan, pp. 1-5, https://doi.org/10.1109/APWCS50173.2021.9548749. 
  7. Li, J., Wu, Z., Hu, Z., Zhang, J., Li, M., Mo, L. and Molinier, M. (2020). "Thin cloud removal in optical remote sensing images based on generative adversarial networks and physical model of cloud distortion." ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier, Vol. 166, pp. 373-389, https://doi.org/10.1016/j.isprsjprs.2020.06.021. 
  8. Park, N. W., Park, M. G., Kwak, G. H. and Hong, S. (2023). "Deep learning-based virtual optical image generation and its application to early crop mapping." Applied Sciences, MDPI, Vol. 13, No. 3, 1766, https://doi.org/10.3390/app13031766. 
  9. Seo, D. K., Kim, Y. H., Eo, Y. D., Lee, M. H. and Park, W. Y. (2018). "Fusion of SAR and multispectral images using random forest regression for change detection." International Journal of Geo-Information, MDPI, Vol. 7, No. 10, 401, https://doi.org/10.3390/ijgi7100401. 
  10. Won, T. and Eo, Y. D. (2022). "An experiment on image restoration applying the cycle generative adversarial network to partial occlusion Kompsat-3A image." Korean Journal of Remote Sensing, KSRS, Vol. 38, No. 1, pp. 33-43, https://doi.org/10.7780/kjrs.2022.38.1.3. 
  11. Zhu, J. Y., Park, T., Isola, P. and Efros, A. A. (2017). "Unpaired image-to-image translation using cycle-consistent adversarial networks." arXiv: 1703.10593, https://doi.org/10.48550/arXiv.1703.10593.