Comparison of Message Passing Interface and Hybrid Programming Models to Solve Pressure Equation in Distributed Memory System

  • Jeon, Byoung Jin (Dept. of Energy System, Graduate School of Energy and Environment, Seoul Nat'l Univ. of Science and Technology)
  • Choi, Hyoung Gwon (Dept. of Mechanical/Automotive Engineering, Seoul Nat'l Univ. of Science and Technology)
  • Received : 2014.08.21
  • Accepted : 2014.11.03
  • Published : 2015.02.01

Abstract

The message passing interface (MPI) and hybrid (MPI+OpenMP) programming models for the parallel computation of a pressure equation were compared on a distributed memory system. Both models were based on domain decomposition, and two sub-domain counts were selected by considering the efficiency of the hybrid model. The parallel performance was measured for various problem sizes using up to 96 threads. It was found that, in addition to the cache-memory size, the overheads of the MPI communication and of the OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the share of these overheads grew as the number of threads increased; here the pure MPI model outperformed the hybrid model because its communication overhead was smaller than the OpenMP directive overhead of the hybrid model. For large problems, the parallel performance was high because, in addition to the cache effect, the relative communication overhead was lower than for small problems; here the hybrid model outperformed pure MPI because the MPI communication overhead was more dominant than the OpenMP directive overhead.
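
The paper itself gives no source code, so the following is only a minimal C sketch of the hybrid model being compared: one Jacobi-style sweep of a 2-D pressure Poisson equation under 1-D domain decomposition, in which MPI exchanges halo rows between sub-domains while OpenMP threads the interior update. The grid size N, the array names p, pn, and b, and the choice of a Jacobi sweep are illustrative assumptions, not the paper's actual solver.

    /* Minimal sketch, assuming a Jacobi sweep (not the paper's solver) of the
       2-D pressure Poisson equation with b = h^2 * f folded into one array.
       Rows are distributed across MPI ranks (1-D domain decomposition), and
       OpenMP threads the update inside each sub-domain (hybrid model). */
    #include <mpi.h>
    #include <omp.h>

    #define N 512   /* interior points per direction; illustrative value */

    void jacobi_sweep(const double *b, double *p, double *pn,
                      int local_rows, int rank, int size, MPI_Comm comm)
    {
        const int stride = N + 2;  /* row length including boundary columns */
        int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        /* MPI part: exchange one halo row with each neighbouring sub-domain. */
        MPI_Sendrecv(&p[1 * stride], stride, MPI_DOUBLE, up, 0,
                     &p[(local_rows + 1) * stride], stride, MPI_DOUBLE, down, 0,
                     comm, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&p[local_rows * stride], stride, MPI_DOUBLE, down, 1,
                     &p[0], stride, MPI_DOUBLE, up, 1,
                     comm, MPI_STATUS_IGNORE);

        /* OpenMP part: thread the interior update; the pure MPI model would
           run this loop single-threaded on a larger number of ranks instead. */
        #pragma omp parallel for
        for (int i = 1; i <= local_rows; ++i)
            for (int j = 1; j <= N; ++j)
                pn[i * stride + j] = 0.25 * (p[(i - 1) * stride + j]
                                           + p[(i + 1) * stride + j]
                                           + p[i * stride + j - 1]
                                           + p[i * stride + j + 1]
                                           - b[i * stride + j]);
    }

The trade-off measured in the abstract is visible in this sketch: the hybrid model pays the OpenMP directive overhead on every sweep but needs fewer MPI messages, while the pure MPI model avoids the directives at the cost of more halo exchanges between its larger number of sub-domains.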
