• Title/Summary/Keyword: interactive rendering


Implementation of Real-time Interactive Ray Tracing on GPU (GPU 기반의 실시간 인터렉티브 광선추적법 구현)

  • Bae, Sung-Min;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.7 no.3 / pp.59-66 / 2007
  • Ray tracing is one of the classical global illumination methods for generating photo-realistic images with lighting effects such as reflection and refraction. However, its computational load restricts its use in real-time applications. To overcome this limitation, much research on GPU (Graphics Processing Unit)-based ray tracing has been presented. In this paper, we implement the ray tracing algorithm of J. Purcell and combine it with two methods to improve rendering performance for interactive applications. First, intersection points of the primary rays are determined efficiently using rasterization on graphics hardware. We then construct an acceleration structure over the 3D objects to improve rendering performance further. Few studies have analyzed in detail the performance gains these considerations bring to ray-traced rendering. We compare our system with GPU-based environment mapping and implement a wireless remote rendering system, which is useful for interactive applications such as real-time composition, augmented reality, and virtual reality.
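The hybrid scheme the abstract describes (rasterize the primary-ray hits, then trace secondary rays) ultimately rests on basic ray-object intersection tests. A minimal ray-sphere intersection in Python, as a sketch of the core operation (function and variable names are illustrative, not taken from the paper's GPU code):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is a unit vector.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray from the origin down +z hits a unit sphere centered at z=5 at t=4.
t = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

In the paper's setting this per-ray test is what rasterization replaces for primary rays, leaving the GPU ray tracer to handle only reflection and refraction rays.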


3D Rendering of Magnetic Resonance Images using Visualization Toolkit and Microsoft.NET Framework

  • Madusanka, Nuwan;Zaben, Naim Al;Shidaifat, Alaaddin Al;Choi, Heung-Kook
    • Journal of Multimedia Information System / v.2 no.2 / pp.207-214 / 2015
  • In this paper, we propose new software for 3D rendering of MR images in the medical domain using the C# wrapper of the Visualization Toolkit (VTK) and the Microsoft .NET framework. Our objective in developing this software was to provide medical image segmentation, 3D rendering, and visualization of the hippocampus from DICOM images for the diagnosis of Alzheimer's disease patients. Such three-dimensional visualization can play an important role in diagnosis. Segmented images can be used to reconstruct the 3D volume of the hippocampus and to extract features such as its surface area and volume to assist the diagnostic process. The software is designed with interactive user interfaces and graphics kernels on the Microsoft .NET framework to benefit from C# programming techniques, in particular its design patterns and rapid application development: a preliminary interactive window is created by C# code, and the VTK kernel is embedded into that window, where the graphics resources are allocated. Visualization is presented through an interactive window so that the data can be rendered according to the user's preference.

Comic-Book Style Rendering for Game (게임을 위한 코믹북 스타일 렌더링)

  • Kim, Tae-Gyu;Oh, Gyu-Hwan;Lee, Chang-Shin
    • Journal of Korea Game Society / v.7 no.4 / pp.81-92 / 2007
  • Many computer games based on NPR (Non-photorealistic Rendering) techniques have been developed recently because of their distinctive visual properties. However, only a limited set of NPR techniques has been exploited in game production, with cartoon-style rendering attracting particular interest. In this paper, we suggest an effective comic-book style rendering method applicable to computer games. To this end, we first characterize the properties of comic books by comparing two visual styles: celluloid animation and comic books. We then propose a real-time comic-book style rendering method built from outline sketching, tone, and hatching. Finally, we examine its effectiveness by observing a game developed with the method.
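The tone component of cartoon- and comic-style shading typically quantizes a continuous diffuse intensity into a few flat bands. A minimal sketch of that quantization step in Python (the band count and mapping are illustrative defaults, not values from the paper):

```python
def toon_shade(intensity, levels=3):
    """Quantize a diffuse intensity in [0, 1] into a few flat tone bands,
    the core of cartoon/comic-book tone rendering."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    band = min(int(intensity * levels), levels - 1)
    return band / (levels - 1)  # darkest band -> 0.0, brightest -> 1.0

# Smoothly varying lighting collapses into three discrete tones.
tones = [toon_shade(i / 10) for i in (1, 4, 9)]
```

In a shader this same step runs per fragment, usually via a 1D lookup texture; hatching then replaces or overlays the darker bands with stroke patterns.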


A Study on the Real Time Culling of Infinite Sets of Geometries Using OSP Tree (OSP Tree를 이용한 무한순차 입력 형상의 실시간 컬링에 관한 연구)

  • 표종현;채영호
    • Korean Journal of Computational Design and Engineering / v.8 no.2 / pp.75-83 / 2003
  • In this paper, an OSP (Octal Space Partitioning) tree is proposed for the real-time culling of infinite sets of geometries in interactive virtual environment applications, and an MSVBSP (Modified Shadow Volume BSP) tree is suggested for occlusion culling. Experimental results show that the OSP and MSVBSP trees can be implemented efficiently for real-time rendering of interactively supplied geometries.
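Whatever the spatial structure, culling of this kind reduces to discarding geometry whose bounding box cannot intersect the region of interest. A minimal sketch of that underlying test in Python (this is the generic box-overlap primitive such trees accelerate, not the OSP or MSVBSP structure itself):

```python
def cull(boxes, view):
    """Keep only the bounding boxes that overlap the view volume.

    Boxes and the view volume are axis-aligned, given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)) pairs; anything with no
    overlap is discarded before rendering.
    """
    def overlaps(a, b):
        # Boxes overlap iff their extents overlap on every axis.
        return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))
    return [b for b in boxes if overlaps(b, view)]

visible = cull([((0, 0, 0), (1, 1, 1)), ((5, 5, 5), (6, 6, 6))],
               view=((0, 0, 0), (2, 2, 2)))
```

An octree like the OSP tree amortizes this test: if a node's box fails it, all geometry in that subtree is rejected at once instead of box by box.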

Efficient Shear-warp Volume Rendering using Spatial Locality of Memory Access (메모리 참조 공간 연관성을 이용한 효율적인 쉬어-왑 분해 볼륨렌더링)

  • 계희원;신영길
    • Journal of KIISE:Computer Systems and Theory / v.31 no.3_4 / pp.187-194 / 2004
  • Shear-warp volume rendering has many advantages, such as good image quality and fast rendering speed. In an interactive classification environment, however, its memory access is inefficient because preprocessed classification is unavailable. In this paper, we present an algorithm that exploits the spatial locality of memory access in the interactive classification environment. We propose an extended model that appends a rotation matrix to the factorization of the viewing transformation, so that rendering proceeds scanline by scanline in both object space and image space. We also identify and solve three problems of the proposed algorithm: inaccurate front-to-back composition, the existence of holes, and increased computational cost. The model is efficient because of the spatial locality of its memory accesses.
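The factorization the abstract extends works by shearing volume slices so that all viewing rays become parallel to a principal axis, after which compositing is a cache-friendly scanline traversal; a cheap 2D warp then corrects the intermediate image. A minimal sketch of the shear step in Python (z is assumed to be the principal axis; names are illustrative):

```python
def shear_factor(view_dir):
    """Per-slice shear coefficients that align viewing rays with +z.

    With shear (sx, sy) = (-vx/vz, -vy/vz), shearing slice k by
    (k*sx, k*sy) turns a ray along view_dir into a ray along +z.
    """
    vx, vy, vz = view_dir
    if abs(vz) < 1e-9:
        raise ValueError("z must be the principal viewing axis (vz != 0)")
    return -vx / vz, -vy / vz

def apply_shear(sx, sy, v):
    """Apply the shear x' = x + sx*z, y' = y + sy*z to a vector."""
    x, y, z = v
    return (x + sx * z, y + sy * z, z)

# After shearing, the view direction maps onto the z axis.
sx, sy = shear_factor((0.5, -0.25, 1.0))
aligned = apply_shear(sx, sy, (0.5, -0.25, 1.0))
```

Because sheared rays advance through memory in axis order, adjacent rays touch adjacent voxels, which is exactly the locality the paper's scanline-based extension is designed to preserve under interactive classification.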

Reconstruction of Color-Volume Data for Three-Dimensional Human Anatomic Atlas (3차원 인체 해부도 작성을 위한 칼라 볼륨 데이터의 입체 영상 재구성)

  • 김보형;이철희
    • Journal of Biomedical Engineering Research / v.19 no.2 / pp.199-210 / 1998
  • In this paper, we present a 3D reconstruction method for color volume data for a computerized human atlas. Binary volume rendering, which exploits object-order ray traversal and run-length encoding, visualizes 3D organs at interactive speed on a general PC without special hardware. The method improves rendering speed by simplifying how pixel values of the intermediate depth image are determined and by applying a newly developed normal vector calculation method. Moreover, we describe a 3D boundary encoding that reduces the data volume considerably without sacrificing image quality. The interactive speed of binary rendering and the storage efficiency of 3D boundary encoding should accelerate the development of PC-based human atlases.
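The run-length encoding the abstract relies on stores each scanline of the binary volume as (value, run length) pairs, so traversal can skip long empty runs in one step instead of visiting every voxel. A minimal sketch in Python (a generic RLE codec, not the paper's exact format):

```python
def rle_encode(row):
    """Run-length encode one scanline of a binary volume as
    (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([v, 1])  # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into a scanline."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

row = [0, 0, 0, 1, 1, 1, 1, 0]
runs = rle_encode(row)
```

During object-order traversal a renderer iterates over `runs` directly, advancing the image-space position by `n` pixels per empty run; this skipping is what makes interactive speed feasible on 1998-era PCs.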


An Efficient Volume Rendering for Dental Diagnosis Using Cone Beam CT data (치과 원추형 CT 영상 데이터 분석에 효율적인 볼륨 렌더링 방법)

  • Koo, Yun Mo
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.55-64 / 2012
  • The advantage of direct volume rendering is that it visualizes the structures of interest within volumetric data. However, it remains difficult to show interior and exterior structures simultaneously. Recently, cone beam computed tomography (CBCT) has been used for dental diagnosis. Despite its usefulness, it is limited in detecting interior structures such as the pulp and the inferior alveolar nerve canal. In this paper, we propose an efficient volume rendering model for visualizing the important interior as well as exterior structures in dental CBCT. It is based on the concept of illustrative volume rendering and enhances the boundaries and silhouettes of structures. Moreover, we present a new method that assigns a different color to structures in the rear so as to distinguish front structures from rear ones. The proposed rendering model is implemented on graphics hardware, achieving interactive performance. In addition, teeth, pulp, and canal can be rendered without a cumbersome segmentation step.
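Boundary and silhouette enhancement in illustrative volume rendering typically modulates a sample's opacity by its gradient magnitude (large at material boundaries) and by how nearly the local normal is perpendicular to the view direction (silhouettes). A sketch of one such per-sample rule in Python; the gain parameters and the way the two terms are combined are illustrative assumptions, not the paper's formulas:

```python
def enhanced_opacity(alpha, grad_mag, n_dot_v, k_b=1.0, k_s=2.0):
    """Illustrative boundary/silhouette opacity enhancement.

    alpha:    base opacity from the transfer function, in [0, 1]
    grad_mag: normalized gradient magnitude, in [0, 1]
    n_dot_v:  dot product of the unit normal and view direction
    k_b, k_s: hypothetical boundary gain and silhouette sharpness knobs
    """
    boundary = min(1.0, alpha * (1.0 + k_b * grad_mag))
    silhouette = (1.0 - abs(n_dot_v)) ** k_s
    return max(boundary, silhouette)
```

The effect is that homogeneous tooth interiors stay translucent while pulp-chamber walls and outlines are opacified, which is how interior structures become visible without segmentation.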

Interactive Pixel-unit AR Lip Makeup System Using RGB Camera

  • Nam, Hyeongil;Lee, Jeongeun;Park, Jong-Il
    • Journal of Broadcast Engineering / v.25 no.7 / pp.1042-1051 / 2020
  • In this paper, we propose an interactive AR (Augmented Reality) lip makeup system operated with bare hands using an RGB camera. Unlike previous interactive makeup studies, the system relies on an RGB camera alone, and it controls the makeup area per pixel rather than per polygon. For pixel-unit control, we also propose a 'Rendering Map' that stores the position of the touched hand relative to the lip landmarks. With this map, the region whose color should change can be specified in the current frame, and the lip color of the corresponding region is adjusted even when the face moves in the next frame. Through user experiments, we compare our makeup method quantitatively and qualitatively with the conventional polygon-unit method. Experimental results demonstrate that the proposed method enhances makeup quality at a small cost in computational complexity. We confirm that natural makeup similar to actual lip makeup is possible by dividing the lip area into finer regions. Furthermore, the method can be applied to make makeup of other facial areas more realistic.
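Storing a touched pixel relative to landmarks, as the Rendering Map does, amounts to re-expressing the pixel in a landmark-anchored coordinate frame so the same lip spot can be recovered after the face moves. A minimal 2D sketch in Python using two landmarks as an axis (a generic change of frame, not the paper's actual map construction):

```python
def relative_to_landmarks(point, landmarks):
    """Express a pixel relative to a pair of landmarks.

    Returns (u, v): u is the normalized position along the axis from
    landmark a to landmark b (0 at a, 1 at b), and v is the signed
    perpendicular distance from that axis. The pair (u, v) stays valid
    when the landmarks translate in a later frame.
    """
    (ax, ay), (bx, by) = landmarks
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    px, py = point[0] - ax, point[1] - ay
    u = (px * dx + py * dy) / length_sq            # along the axis
    v = (px * -dy + py * dx) / length_sq ** 0.5    # perpendicular offset
    return u, v

# A touch midway between the mouth corners, 2 px above the axis.
uv = relative_to_landmarks((5, 2), ((0, 0), (10, 0)))
```

To recolor the same spot in the next frame, the stored (u, v) is mapped back through the inverse transform using that frame's landmark positions.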

'EVE-Sound™' Toolkit for Interactive Sound in Virtual Environment (가상환경의 인터랙티브 사운드를 위한 'EVE-SoundTM' 툴킷)

  • Nam, Yang-Hee;Sung, Suk-Jeong
    • The KIPS Transactions:PartB / v.14B no.4 / pp.273-280 / 2007
  • This paper presents a new 3D sound toolkit called EVE-Sound™ that consists of a pre-processing tool for environment simplification that preserves sound effects, and a 3D sound API for real-time rendering. It is designed to let users interact with complex 3D virtual environments through audio-visual modalities. The EVE-Sound™ toolkit serves two types of users: high-level programmers who need an easy-to-use sound API for developing realistic, audio-visually rendered 3D applications, and researchers in 3D sound who need to experiment with or develop new algorithms without rewriting all the required code from scratch. An interactive virtual environment application was created with a sound engine built on the EVE-Sound™ toolkit; it demonstrates real-time audio-visual rendering performance and the applicability of EVE-Sound™ for building interactive applications with complex 3D environments.
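A core computation in any real-time 3D sound API of this kind is per-source distance attenuation. A sketch of the standard clamped inverse-distance gain model in Python (this is the generic model used by interactive audio engines, offered as background; it is not claimed to be EVE-Sound's algorithm):

```python
import math

def attenuated_gain(source, listener, ref_dist=1.0, rolloff=1.0):
    """Clamped inverse-distance gain for a point sound source.

    Gain is 1.0 inside the reference distance and falls off
    hyperbolically beyond it; ref_dist and rolloff are tuning
    parameters chosen per source.
    """
    d = math.dist(source, listener)
    if d <= ref_dist:
        return 1.0
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))

gain = attenuated_gain((0.0, 0.0, 3.0), (0.0, 0.0, 0.0))
```

In a full engine this gain is recomputed each frame as the listener moves, then combined with directional cues; environment simplification like EVE-Sound's matters because reverberation and occlusion, unlike this distance term, depend on scene geometry.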

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal / v.31 no.4 / pp.429-437 / 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The proposed technique is based on a deep framebuffer framework for fast relighting computation and adopts image-based techniques to allow arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep framebuffer pixel and then computes illumination at each pixel from the cached values. All relighting computations except the deep framebuffer pre-computation are carried out at interactive rates on the GPU.
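The key property of a deep framebuffer is that shading inputs are cached per pixel, so changing a light only re-evaluates the shading equation, never the geometry pass. A simplified, Lambert-only sketch of that per-pixel relighting step in Python (the cache layout and names are illustrative; the paper caches many more attributes and runs this on the GPU):

```python
def relight_pixel(cache, light_dir, light_color):
    """Recompute diffuse shading for one pixel from cached attributes.

    cache:       per-pixel dict with 'albedo' (r, g, b) and 'normal'
                 (unit vector), as read from a deep framebuffer
    light_dir:   unit vector toward the light
    light_color: light RGB intensity
    """
    n = cache["normal"]
    ndotl = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    return tuple(a * c * ndotl for a, c in zip(cache["albedo"], light_color))

pixel = {"albedo": (0.8, 0.2, 0.2), "normal": (0.0, 0.0, 1.0)}
lit = relight_pixel(pixel, (0.0, 0.0, 1.0), (1.0, 1.0, 1.0))
```

Viewpoint changes are what the multiple caching cameras handle: the renderer's pixel-to-framebuffer map selects which cached sample feeds this function for each screen pixel.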