Extending OpenMP on distributed memory systems via global arrays
Distributed Memory Systems (DMS), or computer clusters, have been widely deployed in scientific computing as well as business processing due to their scalable performance at a very modest cost. The importance of a parallel programming API for Distributed Memory Systems that facilitates programmer productivity is increasingly recognized. OpenMP is the de facto parallel programming standard for shared memory systems, offering high programming productivity; however, it is not immediately available for DMS.
The goal of this research is to extend the high-level language OpenMP to DMS to achieve ease of programming. In this dissertation, we propose a novel approach to extending OpenMP to DMS, in which we translate OpenMP into Global Arrays, a library-based programming model. We implemented a basic translation in the OpenUH compiler and evaluated our approach using a variety of benchmarks on different cluster platforms. Moreover, we explored compiler technologies and language extensions to increase the attractiveness of the approach. A new language extension has been proposed for better scalability and is currently under active discussion by the OpenMP ARB for OpenMP 3.0. This work has introduced and demonstrated the feasibility of an easy and productive programming model for both Shared and Distributed Memory Systems. We believe that the work meets an increasing need for a high-productivity, high-performance parallel programming model on existing and emerging hierarchical parallel platforms.