Introduction
This module shows you how to combine MPI and OpenMP parallel programming techniques when writing applications for Stampede2, and helps you understand the motivation for doing so.
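To give a concrete sense of what a hybrid job looks like in practice, here is a minimal sketch of a Slurm batch script for Stampede2. The job name, partition, node and task counts, time limit, and executable name (hybrid_demo.exe) are all illustrative placeholders, not prescriptions; ibrun is TACC's launcher for MPI programs.

```shell
#!/bin/bash
#SBATCH -J hybrid-demo     # job name (illustrative)
#SBATCH -p normal          # queue/partition (adjust for your allocation)
#SBATCH -N 2               # number of nodes
#SBATCH -n 8               # total MPI tasks (here, 4 per node)
#SBATCH -t 00:10:00        # wall-clock time limit

# Each MPI task spawns this many OpenMP threads
export OMP_NUM_THREADS=16

# Launch the hybrid executable with TACC's MPI launcher
ibrun ./hybrid_demo.exe
```

In this sketch, 8 MPI tasks each running 16 OpenMP threads yields 128 threads spread over 2 nodes; choosing task and thread counts that match the node architecture is exactly the kind of tuning decision this module examines.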
Goals
By the end of this topic, you should understand the principles that motivate blending shared- and distributed-memory parallel programming styles on the Stampede2 architecture. You should also be able to understand, build, and run advanced programs that combine MPI and OpenMP parallelization techniques on Stampede2.
Prerequisites
You should have a basic working knowledge of MPI and OpenMP, at the level of the Stampede2 Virtual Workshop topics on the Message Passing Interface (MPI) and OpenMP.
You should also be familiar with common Linux shell commands, either in bash or in csh.