• Title, Summary, Keyword: GPU

Analysis of Job Scheduling and the Efficiency for Multi-core Mobile GPU (멀티코어형 모바일 GPU의 작업 분배 및 효율성 분석)

  • Lim, Hyojeong;Han, Donggeon;Kim, Hyungshin
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.7 / pp.4545-4553 / 2014
  • Mobile GPUs have driven the rapid development of smartphone graphics technology, and most recent smartphones are equipped with a high-performance multi-core GPU. How a multi-core mobile GPU can be utilized efficiently is therefore a critical issue for improving smartphone performance. However, most current research has focused on single-core mobile GPUs; studies of multi-core mobile GPUs are rare. In this paper, the job scheduling patterns and the efficiency of a multi-core mobile GPU are analyzed. The profiling results show that, despite the higher number of GPU cores, the total processing time required for certain graphics applications increased. In addition, when the GPU processes 3D games, substantial overhead is caused by communication not only between the CPU and GPU but also among the GPU cores. These results confirm that more active research on multi-core mobile GPUs is needed to optimize present mobile GPUs.

Intelligent Face Recognition and Tracking System to Distribute GPU Resources using CUDA (쿠다를 사용하여 GPU 리소스를 분배하는 지능형 얼굴 인식 및 트래킹 시스템)

  • Kim, Jae-Heong;Lee, Seung-Ho
    • Journal of IKEEE / v.22 no.2 / pp.281-288 / 2018
  • In this paper, we propose an intelligent face recognition and tracking system that distributes GPU resources using CUDA. The proposed system consists of five stages: a GPU allocation algorithm that distributes GPU resources optimally, face region detection, face recognition using deep learning, real-time face tracking, and PTZ camera control. Unlike approaches that statically assign a GPU to each thread, the allocation algorithm distributes multi-GPU resources flexibly according to each GPU's activation level, which enables stable and efficient use of multiple GPUs. To evaluate the performance of the proposed system, we compared it with a system that does not distribute GPU resources. The non-distributing system showed unstable operation, whereas the proposed system maintained stable resource utilization, demonstrating its utility.
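
The entry above does not spell out the allocation algorithm itself, so the following is only a minimal CUDA sketch of the general idea: before dispatching a job, query each GPU and pick the least-loaded one, here using free device memory as a crude proxy for its activation level. The helper dispatch_job_to and the selection criterion are assumptions for illustration, not the authors' code.

```cuda
// Minimal sketch (not the authors' algorithm): pick the least-loaded GPU,
// using free device memory as a rough proxy for its activation level,
// before dispatching a face-processing job to it.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical dispatch: in a real system this would enqueue the
// detection/recognition kernels on the chosen device.
static void dispatch_job_to(int device) {
    cudaSetDevice(device);
    printf("job dispatched to GPU %d\n", device);
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);

    int best_device = 0;
    size_t best_free = 0;
    for (int d = 0; d < device_count; ++d) {
        cudaSetDevice(d);
        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);  // per-device memory status
        if (free_bytes > best_free) {               // most free memory ~ least loaded
            best_free = free_bytes;
            best_device = d;
        }
    }
    dispatch_job_to(best_device);
    return 0;
}
```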

Analysis of the CPU/GPU Temperature and Energy Efficiency depending on Executed Applications (응용프로그램 실행에 따른 CPU/GPU의 온도 및 컴퓨터 시스템의 에너지 효율성 분석)

  • Choi, Hong-Jun;Kang, Seung-Gu;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information / v.17 no.5 / pp.9-19 / 2012
  • As the clock frequency increases, CPU performance improves continuously. However, power and thermal problems in the CPU become more serious as the clock frequency increases. For this reason, utilizing the GPU to reduce the workload of the CPU has become one of the most popular methods in recent high-performance computer systems. The GPU is a specialized processor originally designed for graphics processing. Recently, technologies such as CUDA, which make it easier to utilize GPU resources, have become popular, improving the performance of computer systems by utilizing the CPU and GPU simultaneously when executing various kinds of applications. In this work, we analyze the temperature and the energy efficiency of a computer system in which the CPU and the GPU are utilized simultaneously, to identify possible problems in upcoming high-performance computer systems. According to our experimental results, the temperatures of both the CPU and the GPU increase when the application is executed on the GPU. When the application is executed on the CPU, the CPU temperature increases whereas the GPU temperature remains unchanged. The computer system shows better energy efficiency when utilizing the GPU rather than the CPU, because the throughput of the GPU is much higher than that of the CPU. However, the temperature of the system tends to increase more quickly when the application is executed on the GPU, because the GPU consumes more power than the CPU.
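
The entry above does not describe how its temperature and power readings were collected. As a hedged illustration of the kind of telemetry such an analysis depends on, the sketch below reads GPU temperature and power draw through NVML; the device index and the use of NVML at all are assumptions, not details from the paper.

```cuda
// Rough sketch (assumption: NVML is available): read the GPU temperature
// and current power draw, the kind of telemetry such an analysis relies on.
// Link with -lnvidia-ml.
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);              // GPU 0 (assumed index)

    unsigned int temp_c = 0;
    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp_c);

    unsigned int power_mw = 0;
    nvmlDeviceGetPowerUsage(dev, &power_mw);          // reported in milliwatts

    printf("GPU temperature: %u C, power: %.1f W\n", temp_c, power_mw / 1000.0);
    nvmlShutdown();
    return 0;
}
```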

Implementation of a 3D Graphics Simulator for GP-GPU (GP-GPU 개발을 위한 3차원 그래픽 시뮬레이터 구현)

  • Yeo, Dong-young;Kim, Woo-young;Jung, Hyung-Ki;Lee, Kwang-Yeob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / pp.337-340 / 2009
  • The performance of the GPU (Graphics Processing Unit), a hardware accelerator for 3D graphics processing, has been improving constantly. Although the GPU was introduced as an efficient way to run complex graphics applications, its resources are rarely utilized to 100%. The GP-GPU (general-purpose GPU), which supports general-purpose operations in addition to graphics operations on the GPU, has drawn attention because its resources can be distributed and controlled effectively. In this paper, a simulator is implemented that provides a virtual GP-GPU environment and supports program design and debugging. Through this, a co-design development environment supports simultaneous design and fast, reliable verification for building a 3D graphics display interface.

Analysis on the Active/Inactive Status of Computational Resources for Improving the Performance of the GPU (GPU 성능 저하 해결을 위한 내부 자원 활용/비활용 상태 분석)

  • Choi, Hongjun;Son, Dongoh;Kim, Jongmyon;Kim, Cheolhong
    • The Journal of the Korea Contents Association / v.15 no.7 / pp.1-11 / 2015
  • In recent high-performance computing systems, GPGPU has been widely used to process general-purpose applications as well as graphics applications, since the GPU can provide optimized computational resources for massive parallel processing. Unfortunately, GPGPU does not fully exploit the computational resources of the GPU when executing general-purpose applications, because the applications are not optimized for the GPU architecture. Therefore, we provide a GPU research guideline to improve the performance of computing systems using GPGPU. To accomplish this, we analyze the factors that degrade GPU performance. In this paper, in order to clearly classify the causes of these factors, GPU core status is defined as five states: fully active, partially active, idle, memory stall, and GPU core stall. All states except fully active cause performance degradation. We evaluate the ratio of each GPU core state depending on the characteristics of the benchmarks to find the specific reasons that degrade GPU performance. According to our simulation results, the partially active, idle, memory stall, and GPU core stall states are induced by computational resource underutilization, low parallelism, high memory request rates, and structural hazards, respectively.
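
As a purely hypothetical illustration of the five-state classification described above, the sketch below tags a simulated cycle sample with one of the five states. The CycleSample fields and the ordering of the checks are invented for illustration and do not come from the paper's simulator.

```cuda
// Hypothetical per-cycle classification into the five states named above;
// the cycle-sample fields are invented for illustration only.
#include <cstdio>

struct CycleSample {
    int  active_lanes;      // SIMT lanes doing useful work this cycle
    int  total_lanes;       // lanes available in the core
    bool has_ready_warp;    // at least one warp is schedulable
    bool memory_stall;      // all warps waiting on memory
    bool structural_stall;  // all warps blocked by a structural hazard
};

enum class CoreStatus { FullyActive, PartialActive, Idle, MemoryStall, CoreStall };

CoreStatus classify(const CycleSample& s) {
    if (s.memory_stall)                  return CoreStatus::MemoryStall;
    if (s.structural_stall)              return CoreStatus::CoreStall;
    if (!s.has_ready_warp)               return CoreStatus::Idle;           // low parallelism
    if (s.active_lanes == s.total_lanes) return CoreStatus::FullyActive;
    return CoreStatus::PartialActive;    // computational resource underutilization
}

int main() {
    CycleSample s{16, 32, true, false, false};          // half the lanes busy
    printf("status = %d\n", static_cast<int>(classify(s)));  // PartialActive
    return 0;
}
```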

Multi GPU Based Image Registration for Cerebrovascular Extraction and Interactive Visualization (뇌혈관 추출과 대화형 가시화를 위한 다중 GPU기반 영상정합)

  • Park, Seong-Jin;Shin, Yeong-Gil
    • Journal of KIISE: Computing Practices and Letters / v.15 no.6 / pp.445-449 / 2009
  • In this paper, we propose a computationally efficient multi-GPU accelerated image registration technique to correct the motion difference between a pre-contrast CT image and a post-contrast CTA image. Our method consists of two steps: multi-GPU based image registration and cerebrovascular visualization. First, it computes a similarity measure for the voxel-based registration, exploiting the parallelism between the two GPUs as well as the parallelism inside each GPU. Then, it subtracts the CT image transformed by the optimal transformation matrix from the CTA image and visualizes the subtracted volume using a GPU-based volume rendering technique. We compare our proposed method with existing methods using 5 pairs of pre-contrast brain CT images and post-contrast brain CTA images in order to demonstrate its superiority in terms of visual quality and computational time. Experimental results show that our method visualizes cerebral vessels well, which helps in diagnosing vascular disease. Our multi-GPU based approach is 11.6 times faster than a CPU-based approach and 1.4 times faster than a single-GPU based approach in total processing time.
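
The sketch below is not the authors' implementation; it only illustrates the multi-GPU idea in CUDA: split the volume in half, let each GPU accumulate a partial sum-of-squared-differences similarity term, and combine the partial sums on the host. For brevity the two GPUs run back to back and a single atomic accumulator replaces a proper reduction, whereas the paper overlaps both GPUs.

```cuda
// Minimal sketch (not the authors' code): two GPUs each compute a partial
// sum-of-squared-differences over half the volume; the host adds them up.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void ssd_kernel(const float* a, const float* b, int n, float* acc) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float d = a[i] - b[i];
        atomicAdd(acc, d * d);   // a real implementation would use a hierarchical reduction
    }
}

int main() {
    const int n = 1 << 20;                            // voxels per GPU half
    std::vector<float> ct(2 * n, 1.0f), cta(2 * n, 1.5f);

    int gpus = 0;
    cudaGetDeviceCount(&gpus);
    if (gpus < 2) { printf("needs 2 GPUs\n"); return 1; }

    float partial[2] = {0.0f, 0.0f};
    for (int g = 0; g < 2; ++g) {                     // one half of the volume per GPU
        cudaSetDevice(g);
        float *d_a, *d_b, *d_acc;
        cudaMalloc(&d_a, n * sizeof(float));
        cudaMalloc(&d_b, n * sizeof(float));
        cudaMalloc(&d_acc, sizeof(float));
        cudaMemcpy(d_a, ct.data()  + g * n, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, cta.data() + g * n, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemset(d_acc, 0, sizeof(float));
        ssd_kernel<<<(n + 255) / 256, 256>>>(d_a, d_b, n, d_acc);
        cudaMemcpy(&partial[g], d_acc, sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_acc);
    }
    printf("SSD = %f\n", partial[0] + partial[1]);
    return 0;
}
```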

Evaluation of the Data Migration between CPU Memory and GPU Memory for a NVIDIA Pascal GPU Using Unified Memory (통합 메모리를 사용하는 NVIDIA 파스칼 GPU에서의 CPU 메모리와 GPU 메모리 간 데이터 통신 분석)

  • Shin, Philkyue;Hong, Seongsoo
    • Proceedings of the Korean Society of Computer Information Conference / pp.7-10 / 2018
  • Unified memory is a software runtime environment that implicitly performs data communication between CPU memory and GPU memory transparently to the developer, making CPU memory and GPU memory appear to the developer as a single unified memory. Despite its advantages, unified memory is not yet widely used, because the implicitly performed data communication is known to incur large overhead. However, no study has yet analyzed in detail how this data communication takes place and how the overhead arises. Targeting a GPU based on Pascal, one of NVIDIA's latest GPU microarchitectures, we experimentally analyze the conditions under which data communication occurs when unified memory is used and the effect of this data communication on the execution time of GPU applications. The experimental results show that the overhead of unified memory has two causes. First, with unified memory, whenever the CPU or GPU accesses data, the data is migrated to CPU or GPU memory and the previously migrated data is evicted. Data that would be reused is therefore also evicted, causing additional data communication, and the latency of this data communication is added to the execution time of the GPU application. Second, with unified memory, data communication and kernels cannot be executed concurrently even when they are assigned to different streams. The execution time of the GPU application therefore increases by the execution time of the data communication and kernels that would otherwise have overlapped.
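
A minimal CUDA sketch of the access pattern the entry above analyzes, assuming a Pascal-class GPU with on-demand page migration: a single cudaMallocManaged allocation is touched by the CPU, then by a kernel, then by the CPU again, and each switch is where the implicit migrations (and their overhead) occur. This is not the authors' benchmark.

```cuda
// Minimal sketch (not the authors' benchmark): one managed allocation is
// touched by the CPU, then by a GPU kernel, then by the CPU again. On a
// Pascal GPU each switch triggers the implicit page migrations whose
// overhead the paper measures.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= f;
}

int main() {
    const int n = 1 << 24;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));     // single CPU/GPU-visible allocation

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU touch: pages resident on the host

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU touch: pages fault and migrate to the device
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // CPU touch again: pages migrate back
    cudaFree(data);
    return 0;
}
```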

Implementation of GPU based MPEG-2 Decoder (GPU 기반의 MPEG-2 디코더의 구현)

  • Kim, Kyung-Su;Kim, Hong-Sik;Kim, Cheong-Ghil;Park, Woo-Chan
    • Journal of Digital Contents Society / v.9 no.3 / pp.371-377 / 2008
  • Recently, GPU performance has been increasing much faster than CPU performance, and the GPU is used for various application programs. In this paper, an MPEG-2 decoder is implemented based on Cg, a GPU programming language. The proposed approach performs block rendering with texture data according to the video standard with very high parallelism, using the GPU pipeline, which has a stream-processing structure. To reduce the data bandwidth between system memory and the GPU, the local memory of the graphics card is used. According to the experiments, the proposed scheme shows a performance improvement of more than 2 times compared to the CPU-based scheme.
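
The paper itself implements the decoder with Cg shaders and block rendering; as a rough CUDA analogue of the per-pixel, per-block parallelism it exploits, the sketch below adds decoded IDCT residuals to a motion-compensated prediction, one thread per pixel. The kernel and launcher names are illustrative assumptions.

```cuda
// Rough CUDA analogue (the paper uses Cg shaders and block rendering):
// reconstruct a frame by adding decoded IDCT residuals to the motion-
// compensated prediction, one thread per pixel.
#include <cuda_runtime.h>

__global__ void reconstruct(const unsigned char* prediction,
                            const short* residual,
                            unsigned char* frame,
                            int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    int v = prediction[idx] + residual[idx];                       // prediction + IDCT residual
    frame[idx] = static_cast<unsigned char>(min(255, max(0, v)));  // clamp to 8 bits
}

// Launch helper: 16x16 thread blocks match MPEG-2 macroblock geometry.
void launch_reconstruct(const unsigned char* d_pred, const short* d_res,
                        unsigned char* d_frame, int w, int h) {
    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    reconstruct<<<grid, block>>>(d_pred, d_res, d_frame, w, h);
}
```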

Performance Evaluation of the GPU Architecture Executing Parallel Applications (병렬 응용프로그램 실행 시 GPU 구조에 따른 성능 분석)

  • Choi, Hong-Jun;Kim, Cheol-Hong
    • The Journal of the Korea Contents Association / v.12 no.5 / pp.10-21 / 2012
  • The role of the GPU has evolved from graphics-specific processing to general-purpose processing with the development of the unified shader core architecture. In particular, execution methods for general-purpose parallel applications using the GPU have been researched intensively, since the parallel hardware architecture can be utilized efficiently when parallel applications are executed. However, the current GPU architecture has limitations in executing general-purpose parallel applications, since the GPU is not yet specialized for general-purpose computing. To improve GPU performance when general-purpose parallel applications are executed, the GPU architecture should evolve. In this work, we analyze GPU performance as the architecture varies in the number of cores and the clock frequency. Our simulation results show that GPU performance improves by up to 125.8% and 16.2% as the number of cores and the clock frequency increase, respectively. However, the improvement in GPU performance saturates even as the number of cores and the clock frequency continue to increase, since data cannot be supplied to the GPU fast enough due to the memory bandwidth limit. Consequently, to achieve high performance effectiveness on the GPU, computational resources must be considered more carefully.
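
The saturation effect described above can be illustrated with a back-of-envelope roofline-style estimate, which is not the paper's simulation methodology: attainable throughput is the minimum of the compute peak and what the memory bandwidth can feed, so adding cores stops helping once the memory-bound ceiling is reached. All hardware numbers below are made up for illustration.

```cuda
// Back-of-envelope roofline-style estimate (not the paper's simulation):
// attainable throughput is the minimum of compute peak and what memory
// bandwidth can feed. The numbers below are illustrative only.
#include <cstdio>
#include <algorithm>

int main() {
    const double bandwidth_gbs        = 150.0;  // assumed memory bandwidth (GB/s)
    const double flops_per_byte       = 4.0;    // assumed arithmetic intensity of the kernel
    const double flops_per_core_clock = 2.0;    // e.g. one FMA per core per cycle
    const double clock_ghz            = 1.0;    // assumed core clock

    for (int cores = 128; cores <= 2048; cores *= 2) {
        double compute_peak = cores * clock_ghz * flops_per_core_clock;  // GFLOP/s
        double memory_bound = bandwidth_gbs * flops_per_byte;            // GFLOP/s memory can feed
        double attainable   = std::min(compute_peak, memory_bound);
        printf("%4d cores: peak %7.1f GFLOP/s, attainable %7.1f GFLOP/s\n",
               cores, compute_peak, attainable);
    }
    return 0;
}
```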

Analysis of the Influence of GPU Task Length on the Fairness of Virtual Machines in Direct Path-through based GPU Virtualization Environment (직접 통로 기반 GPU 가상화 환경에서 GPU 연산시간의 길이가 가상머신의 공평성에 미치는 영향 분석)

  • Kang, Jihun;Yu, Heonchang;Gil, Joon-Min
    • Proceedings of the Korea Information Processing Society Conference / pp.32-35 / 2017
  • Direct pass-through based GPU (Graphics Processing Unit) virtualization is one of the common techniques for providing GPU device functionality to virtual machines in a cloud environment. Because GPU devices can accelerate computation through GPGPU technology, they are widely used to provide high-performance computation to virtual machines in the cloud as well. However, existing virtual machine scheduling techniques schedule based on the CPU time used by each virtual machine and do not take GPU resource usage into account. In this paper, through performance experiments in an environment where virtual machines performing GPU and CPU computations run concurrently, we analyze the performance impact that one virtual machine's GPU computation has on the other virtual machines, and the effect that the GPU task length has on the other virtual machines.
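
A minimal sketch of the kind of workload knob such an experiment needs, not the authors' benchmark: a CUDA kernel whose running time scales with an iteration count, so the "GPU task length" seen by the other virtual machines can be varied and timed.

```cuda
// Minimal sketch (not the authors' benchmark): a kernel whose runtime scales
// with an iteration count, the knob one would turn to vary "GPU task length"
// when measuring how one VM's GPU work affects the others.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void busy_kernel(float* out, long iterations) {
    float v = threadIdx.x * 0.001f + 1.0f;
    for (long i = 0; i < iterations; ++i)
        v = v * 1.000001f + 0.000001f;                   // dependent FMAs keep the SM busy
    out[threadIdx.x + blockIdx.x * blockDim.x] = v;      // prevent the loop being optimized away
}

int main(int argc, char** argv) {
    long iterations = (argc > 1) ? atol(argv[1]) : 1000000L;  // task length knob

    float* d_out;
    cudaMalloc(&d_out, 256 * 64 * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    busy_kernel<<<64, 256>>>(d_out, iterations);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU task length: %ld iterations -> %.2f ms\n", iterations, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}
```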