• Title/Summary/Keyword: StyleGAN

A study of interior style transformation with GAN model (GAN을 활용한 인테리어 스타일 변환 모델에 관한 연구)

  • Choi, Jun-Hyeck; Lee, Jae-Seung
    • Journal of KIBIM, v.12 no.1, pp.55-61, 2022
  • Recently, demand for designing one's own space has been increasing with the rapid growth of the home furnishing market. However, it is not easy to compare styles between the before-construction and after-construction views of a space. This study aims to translate a real interior image into another style using a GAN model trained on interior images. To implement this, we first established style criteria and collected modern, natural, and classic style images, then experimented by applying ResNet, UNet, and the gradient penalty concept to the CycleGAN algorithm. After training, the model recognized common indoor elements such as floors, walls, and furniture, and converted colors and materials to suit each interior style. On the other hand, the forms of furniture and ornaments and detailed pattern expressions were difficult for the CycleGAN model to recognize, and accuracy was lacking. Although UNet converted images more radically than ResNet, its results were more stained. The GAN algorithm produced results within 2 seconds, which makes it possible to quickly and easily visualize and compare the style of an interior space before and after construction. Furthermore, this GAN can be used in design rendering, including interiors.
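
The cycle-consistency idea behind the CycleGAN variants in this entry can be sketched in a few lines. The toy "generators" below are hypothetical stand-ins for the paper's ResNet/UNet networks; only the loss computation reflects the actual CycleGAN cycle loss.

```python
import numpy as np

# Toy stand-ins for CycleGAN's two generators G: X -> Y and F: Y -> X.
# In the paper these would be ResNet- or UNet-based networks; here they
# are simple invertible functions purely to illustrate the loss.
def G(x):  # hypothetical "modern -> natural" style mapping
    return x * 0.9 + 0.1

def F(y):  # hypothetical inverse mapping
    return (y - 0.1) / 0.9

def cycle_consistency_loss(x, G, F):
    """L1 distance between x and F(G(x)): the CycleGAN cycle loss."""
    return np.mean(np.abs(F(G(x)) - x))

x = np.random.rand(4, 8, 8, 3)  # a batch of toy "interior" images
loss = cycle_consistency_loss(x, G, F)
print(loss)  # near 0 here, because F exactly inverts G
```

During real training this loss is added to the adversarial losses of both directions, which is what forces the model to preserve the scene layout while changing only style.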

A Study on GAN Algorithm for Restoration of Cultural Property (pagoda)

  • Yoon, Jin-Hyun; Lee, Byong-Kwon; Kim, Byung-Wan
    • Journal of the Korea Society of Computer and Information, v.26 no.1, pp.77-84, 2021
  • Today, the restoration of cultural properties is moving from reliance on existing data and experts to the application of the latest IT technology. However, there are cases where newly released data show that the original restoration was incorrect; restoration can also take too long, and the results may differ from expectations. We therefore aim to restore cultural properties quickly using deep learning. GAN-derived algorithms such as DCGAN are constantly evolving in image generation and restoration, and GAN algorithms are used in many fields. Among the various GAN algorithms, we try to find the optimal one for the restoration of cultural properties, and we show that it can be applied in practice by obtaining meaningful results. As a result of experimenting with the DCGAN and StyleGAN algorithms, it was confirmed that the DCGAN algorithm generates a top image at a low resolution.

Med-StyleGAN2: A GAN-Based Synthetic Data Generation for Medical Image Generation (Med-StyleGAN2: 의료 영상 생성을 위한 GAN 기반의 합성 데이터 생성)

  • Jae-Ha Choi; Sung-Yeon Kim; Hae-Rin Byeon; Se-Yeon Lee; Jung-Soo Lee
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.904-905, 2023
  • In this paper, we propose Med-StyleGAN2 for medical image generation. Generative adversarial networks are effective at image generation but have limitations when generating medical images. We therefore propose a StyleGAN-based training model specialized for medical image generation. It can be used in various medical imaging applications, and by performing quantitative and qualitative evaluations of the generated medical images, we study the potential for progress in the field of medical image generation.

A StyleGAN Image Detection Model Based on Convolutional Neural Network (합성곱신경망 기반의 StyleGAN 이미지 탐지모델)

  • Kim, Jiyeon; Hong, Seung-Ah; Kim, Hamin
    • Journal of Korea Multimedia Society, v.22 no.12, pp.1447-1456, 2019
  • As artificial intelligence technology is actively used in image processing, high-quality fake images can be generated with deep learning. Fake images generated using GANs (Generative Adversarial Networks), an unsupervised learning approach, have reached a level that is hard to discriminate with the naked eye. Detecting these fake images is necessary because they can be abused for crimes such as illegal content production, identity fraud, and defamation. In this paper, we develop a CNN (Convolutional Neural Network)-based deep-learning model for the detection of StyleGAN fake images. StyleGAN is a GAN algorithm with excellent performance in generating face images. We experiment with 48 scenarios developed by combining parameters of the proposed model, training and testing each scenario with 300,000 real and fake face images in order to present model parameters that improve fake-face detection performance.
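
The entry's 48 scenarios come from combining model parameters, though the abstract does not list which ones. A minimal sketch, assuming a purely hypothetical grid of learning rate, batch size, depth, and dropout (3 × 2 × 4 × 2 = 48 combinations):

```python
from itertools import product

# Hypothetical hyperparameter grid; the paper reports 48 scenarios built
# by combining parameters but does not enumerate them, so these values
# are illustrative only.
learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [64, 128]
conv_layers = [2, 3, 4, 5]
dropout_rates = [0.25, 0.5]

scenarios = [
    {"lr": lr, "batch": b, "layers": n, "dropout": d}
    for lr, b, n, d in product(learning_rates, batch_sizes,
                               conv_layers, dropout_rates)
]
print(len(scenarios))  # 48
```

Each dictionary would then configure one training/evaluation run on the 300,000-image dataset.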

Re-Destyle: Exemplar-Based Neural Style Transfer using Improved Facial Destylization (Re-Destyle: 개선된 Facial Destylization 을 활용한 예시 기반 신경망 스타일 전이 연구)

  • Yoo, Joowon
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.06a, pp.1339-1342, 2022
  • Artistic style transfer, which applies the characteristics of an artwork to another image, is a long-standing topic in image processing. Recently, studies that use pretrained GANs (generative adversarial networks) such as StyleGAN to learn to generate high-resolution artistic portraits from limited data have produced results in many directions. In this paper, we introduce DualStyleGAN, which achieves high-resolution exemplar-based style transfer through a two-path StyleGAN and Facial Destylization; we analyze the limitations of the Facial Destylization method used in that work; and we propose an improved method, Re-Destyle. Applying Facial Destylization with Re-Destyle improves training time by more than 20x over the previous method. As a result, with fewer than 1,000 images and only 1-2 hours of additional training, an exemplar-based portrait style-transfer and image-generation model can be trained at 1024×1024 resolution for a desired target portrait style.

A Multi-domain Style Transfer by Modified Generator of GAN

  • Lee, Geum-Boon
    • Journal of the Korea Society of Computer and Information, v.27 no.7, pp.27-33, 2022
  • In this paper, we propose a novel generator architecture for multi-domain style transfer that generates a styled image by transferring a style onto a content image, rather than performing image-to-image translation. A latent vector and Gaussian noise are added to the GAN generator so that high-quality images are generated while accounting for the characteristics of each domain's data distribution and preserving the features of the content data. With the proposed generator architecture, the networks are configured so that the content image learns the styles of each domain well, and the method is applied to a domain composed of images of the four seasons to show high-resolution style-transfer results.

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee; Seung-Ho Lee
    • Journal of IKEEE, v.27 no.4, pp.464-471, 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using the StyleGAN Encoder. The generation process first creates a face image with the StyleGAN Encoder, then changes the expression by applying a boundary, learned with an SVM, to the latent vector. However, when the boundary for a smiling expression is learned, age distortion occurs along with the change in expression: the smile boundary learned by the SVM includes the wrinkles caused by the expression change as learning elements, so age characteristics are learned as well. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and uses it to adjust the smile boundary by the age boundary in proportion to that coefficient. To confirm the effectiveness of the proposed method, experiments were conducted on the FFHQ dataset, a publicly available standard face dataset, and FID scores were measured. For smile images, the FID score between the ground truth and images generated by the proposed method improved by about 0.46 over the existing method, and the FID score between images generated by the StyleGAN Encoder and smile images generated by the proposed method improved by about 1.031. For non-smile images, the corresponding FID scores improved by about 2.25 and about 1.908, respectively.
    Meanwhile, estimating the age of each generated expression image and measuring the MSE between the estimated age and that of the image generated with the StyleGAN Encoder, the proposed method improved the average age error by about 1.5 for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.
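
The boundary adjustment can be sketched as a projection in latent space: subtract the age boundary from the smile boundary in proportion to their correlation coefficient. The exact update rule and the random 512-d vectors below are assumptions for illustration, not the paper's trained boundaries.

```python
import numpy as np

# Random stand-ins for SVM-learned unit boundary vectors in StyleGAN's
# 512-d latent space; b_smile is deliberately entangled with b_age.
rng = np.random.default_rng(0)
b_age = rng.standard_normal(512)
b_age /= np.linalg.norm(b_age)
b_smile = rng.standard_normal(512) + 0.5 * b_age
b_smile /= np.linalg.norm(b_smile)

rho = float(b_smile @ b_age)   # correlation of the unit boundary vectors
b_adj = b_smile - rho * b_age  # remove the age component in proportion to rho
b_adj /= np.linalg.norm(b_adj)

print(abs(float(b_adj @ b_age)))  # ~0: adjusted boundary ignores age
```

Editing a latent code along `b_adj` instead of `b_smile` would then change the expression while moving (approximately) orthogonally to the age direction.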

GAN-based Image-to-image Translation using Multi-scale Images (다중 스케일 영상을 이용한 GAN 기반 영상 간 변환 기법)

  • Chung, Soyoung; Chung, Min Gyo
    • The Journal of the Convergence on Culture Technology, v.6 no.4, pp.767-776, 2020
  • GcGAN is a deep learning model that translates styles between images under a geometric-consistency constraint. However, GcGAN has the disadvantage that it does not properly maintain the detailed content of an image, since it preserves content only through limited geometric transformations such as rotation or flipping. In this study, we therefore propose a new image-to-image translation method, MSGcGAN (Multi-Scale GcGAN), which improves on this disadvantage. MSGcGAN, an extension of GcGAN, translates styles between images while reducing semantic distortion and maintaining detailed content by learning multi-scale images simultaneously and extracting scale-invariant features. The experimental results showed that MSGcGAN outperformed GcGAN in both quantitative and qualitative aspects, translating style more naturally while maintaining the overall content of the image.
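
The multi-scale input that MSGcGAN learns from can be illustrated with a simple image pyramid; the 2×2 average pooling below is a generic choice, since the abstract does not specify the paper's exact downsampling.

```python
import numpy as np

def downsample2x(img):
    """Halve spatial resolution by 2x2 average pooling (H, W, C array)."""
    h, w = img.shape[:2]
    return img[:h - h % 2, :w - w % 2].reshape(
        h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def pyramid(img, levels=3):
    """Build a list of progressively downsampled copies of img."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample2x(out[-1]))
    return out

img = np.random.rand(64, 64, 3)
print([p.shape for p in pyramid(img)])  # [(64, 64, 3), (32, 32, 3), (16, 16, 3)]
```

Feeding all levels at once is what lets the model learn features that hold across scales rather than at a single resolution.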

Frontal Face Generation Algorithm from Multi-view Images Based on Generative Adversarial Network

  • Heo, Young-Jin; Kim, Byung-Gyu; Roy, Partha Pratim
    • Journal of Multimedia Information System, v.8 no.2, pp.85-92, 2021
  • A face carries much information about a person's identity. Because of this property, various tasks such as expression recognition, identity recognition, and deepfake generation have been actively studied. Most of them use an exact frontal view of the given face. In real situations, however, the face can be observed from various directions rather than exactly head-on, and a profile (side view) lacks information compared with the frontal view. Therefore, if we can generate the frontal face from other directions, we can obtain more information about the given face. In this paper, we propose a combined style model based on a conditional generative adversarial network (cGAN) for generating the frontal face from multi-view images, capturing not only the style around the face (hair and beard) but also detailed areas (eyes, nose, and mouth).

Construction of Dynamic Image Animation Network for Style Transformation Using GAN, Keypoint and Local Affine (GAN 및 키포인트와 로컬 아핀 변환을 이용한 스타일 변환 동적인 이미지 애니메이션 네트워크 구축)

  • Jang, Jun-Bo
    • Proceedings of the Korea Information Processing Society Conference, 2022.05a, pp.497-500, 2022
  • High-quality images and videos are now being generated as deep-learning technologies for image style translation and for converting static images into dynamic ones have developed. However, transforming images manually takes a lot of time and resources, and natural image transformation is difficult and requires professional knowledge. Therefore, in this paper, we study natural style mixing through a GAN-based style-conversion network and natural dynamic-image generation using the First Order Motion Model (FOMM) network.
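
The local affine motion model that FOMM uses (also named in this entry's title) warps the neighborhood of each keypoint with p′ = A·p + t. A toy example applying one such transform, a 90-degree rotation plus a translation, to a few keypoints; the values are illustrative, not from the paper.

```python
import numpy as np

def local_affine(points, A, t):
    """Apply p' = A @ p + t to each row of a (N, 2) keypoint array."""
    return points @ A.T + t

keypoints = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # 90-degree rotation
t = np.array([1.0, 0.0])                         # shift right by 1

moved = local_affine(keypoints, A, t)
print(np.round(moved, 6))
```

In FOMM, one such (A, t) pair is estimated per keypoint from the driving video, and a dense motion field is composed from these local transforms to animate the source image.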