A Study on the Automatic Parallelization Method and Tool Development

  • Received : 2020.05.28
  • Accepted : 2020.06.09
  • Published : 2020.08.31

Abstract

Computer hardware has recently been evolving by increasing the number of computing cores rather than by raising the clock speed. To fully exploit the performance of such parallel hardware, the running program must itself be parallelized. However, most software developers are accustomed to sequential programming and, in most cases, write programs that operate sequentially; they also find it difficult to design and develop software in parallel. We propose a method that automatically converts a sequential C/C++ program into a parallelized program and develop a parallelization tool that supports it. The tool supports Open Multi-Processing (OpenMP) and the Parallel Patterns Library (PPL) as target parallel frameworks. Fully automatic parallelization is difficult because of dynamic features of the C/C++ language such as pointer operations and polymorphism. Therefore, rather than aiming for fully automatic parallelization, this study focuses on verifying the conditions for parallelization and on providing detailed advice to developers when parallelization is not possible.
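
As a concrete illustration (a minimal sketch, not taken from the paper; the function and variable names are illustrative), the code below shows the kind of transformation such a tool targets: a loop whose iterations are independent is either annotated with an OpenMP pragma or rewritten with PPL's concurrency::parallel_for.

    #include <cstddef>
    #include <vector>

    // Sequential form: iteration i reads only in[i] and writes only out[i],
    // so there is no loop-carried dependence and the loop can be parallelized.
    void scale_sequential(const std::vector<double>& in,
                          std::vector<double>& out, double factor) {
        for (std::size_t i = 0; i < in.size(); ++i)
            out[i] = in[i] * factor;
    }

    // OpenMP form such a tool could emit: the pragma distributes the
    // iterations across the available cores (compile with -fopenmp or /openmp).
    void scale_openmp(const std::vector<double>& in,
                      std::vector<double>& out, double factor) {
        const long long n = static_cast<long long>(in.size());
        #pragma omp parallel for
        for (long long i = 0; i < n; ++i)
            out[i] = in[i] * factor;
    }

    #if defined(_MSC_VER)
    #include <ppl.h>   // Parallel Patterns Library (MSVC only)

    // PPL form: concurrency::parallel_for takes the loop bounds and a lambda body.
    void scale_ppl(const std::vector<double>& in,
                   std::vector<double>& out, double factor) {
        concurrency::parallel_for(std::size_t(0), in.size(), [&](std::size_t i) {
            out[i] = in[i] * factor;
        });
    }
    #endif

If, by contrast, an iteration read a value written by an earlier iteration (e.g. out[i] = out[i-1] + in[i]) or wrote through a pointer that might alias in, the transformation would be unsafe; those are the kinds of parallelization conditions the proposed tool checks and, when they fail, reports back to the developer.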
