GAN System Using Noise for Image Generation

  • Bae, Sangjung (Department of Computer Engineering, Paichai University) ;
  • Kim, Mingyu (Department of Computer Engineering, Paichai University) ;
  • Jung, Hoekyung (Department of Computer Engineering, Paichai University)
  • Received : 2020.03.16
  • Accepted : 2020.04.14
  • Published : 2020.06.30

Abstract

Generative adversarial networks (GANs) generate images by training two neural networks against each other. During generation, randomly sampled noise is rearranged into an image. Images produced this way may fail to form properly depending on the noise, and when the image has few pixels it is difficult to generate a usable result. In addition, as the speed and volume of data accumulation in data classification grow, labeling the data becomes increasingly difficult. To address these problems, this paper proposes a technique that derives the generator's noise from real data rather than sampling it purely at random, and generates images from that noise. Because the proposed system generates images based on existing images, we confirm that it can produce more natural images, and when these images are used for training, the system achieves a higher hit rate than the conventional GAN-based method.
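The abstract's core idea is to replace the generator's purely random latent noise with noise derived from real data. The paper does not specify the exact construction, so the sketch below is one plausible, hypothetical reading: flatten a real image, resample it to the latent dimension, normalize it, and add a small Gaussian perturbation so the input remains stochastic. The function names and parameters (`data_driven_noise`, `latent_dim`, `noise_scale`) are illustrative, not the authors' API.

```python
import numpy as np

def data_driven_noise(image, latent_dim=100, noise_scale=0.1, rng=None):
    """Derive a generator input vector from a real image instead of pure
    random noise. Construction (flatten -> resample -> normalize -> perturb)
    is an assumption; the paper only states that real data seeds the noise."""
    rng = np.random.default_rng() if rng is None else rng
    flat = image.astype(np.float64).ravel()
    # Linearly resample the flattened pixels down (or up) to latent_dim values.
    idx = np.linspace(0, flat.size - 1, latent_dim)
    z = np.interp(idx, np.arange(flat.size), flat)
    # Normalize to zero mean / unit variance, matching typical GAN inputs.
    z = (z - z.mean()) / (z.std() + 1e-8)
    # Small Gaussian perturbation keeps the generator input stochastic.
    z += noise_scale * rng.standard_normal(latent_dim)
    return z

def random_noise(latent_dim=100, rng=None):
    """Baseline: the conventional GAN input, pure standard-normal noise."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.standard_normal(latent_dim)
```

Either vector would be fed to the generator network unchanged; the difference is that `data_driven_noise` preserves coarse structure of the source image, which is what the paper credits for the more natural generated images.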
