Introduction

  • Objectives of this Tutorial
    1. Introduces you to the fundamentals of MPI by way of F77, F90, and C examples
    2. Shows you how to compile, link and run MPI code
    3. Covers additional MPI routines that deal with virtual topologies
    4. Cites references
  • What is MPI?
    1. MPI stands for Message Passing Interface and its standard is set by the Message Passing Interface Forum
    2. It is a library of subroutines/functions, NOT a language
    3. MPI subroutines are callable from Fortran and C
    4. The programmer writes Fortran/C code with appropriate MPI library calls, compiles it with a Fortran/C compiler, then links it with the message passing library
  • Why MPI?
    1. For large problems that demand better turnaround time (and access to more memory)
    2. For Fortran "dusty deck" codes, which are often too time-consuming to rewrite to take advantage of parallelism. Even on SMP machines such as the SGI PowerChallengeArray and Origin2000, an automatic parallelizer might not be able to detect the parallelism.
    3. For distributed memory machines, such as a cluster of Unix workstations or a cluster of NT/Linux PCs.
    4. To maximize portability; MPI works on both distributed and shared memory architectures.

Preliminaries of MPI Message Passing

  • In a user code, any file in which MPI library calls occur must include the appropriate header file:
    #include "mpi.h" for C code or
    include "mpif.h" for Fortran code
    These files contain definitions of constants, prototypes, etc. that are necessary to compile a program containing MPI library calls
  • MPI is initiated by a call to MPI_Init. This MPI routine must be called before any other MPI routine, and it must be called only once in the program.
  • MPI processing ends with a call to MPI_Finalize.
  • Essentially the only difference between MPI subroutines (for Fortran programs) and MPI functions (for C programs) is the error reporting flag. In Fortran, it is returned as the last argument of the subroutine; in C, the integer error flag is returned as the function value. Consequently, MPI Fortran routines always take one more argument than their C counterparts. (A minimal skeleton illustrating this follows the list.)
  • C MPI function names start with "MPI_" followed by a character string whose leading character is upper case and the rest lower case (e.g., MPI_Init). Fortran subroutines bear the same names but are case-insensitive.
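
Putting these points together, here is a minimal C skeleton (an illustrative sketch, not one of the tutorial's examples; in Fortran each call would instead pass the error flag as a last argument, e.g., call MPI_INIT(ierr)):

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank, size, ierr;

        ierr = MPI_Init(&argc, &argv);       /* must precede all other MPI calls */
        if (ierr != MPI_SUCCESS) return 1;   /* in C, the error flag is the return value */

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank (ID) of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        printf("Process %d of %d\n", rank, size);

        MPI_Finalize();                      /* no MPI calls may follow this */
        return 0;
    }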

Learn Basic MPI Routines Through Examples

There are essentially two different paradigms in MPI programming: SPMD (Single Program Multiple Data) and MPMD (Multiple Programs Multiple Data). The example programs shown below employ the SPMD paradigm, i.e., each process runs an identical copy of the same program.
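
Under SPMD, different behavior on different processes is obtained by branching on the process rank; a minimal illustrative sketch:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* one executable, but each process can take a different path */
        if (rank == 0) {
            printf("Rank 0 often coordinates the other processes\n");
        } else {
            printf("Rank %d computes its share of the work\n", rank);
        }

        MPI_Finalize();
        return 0;
    }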

While each example below is self-contained, readers are strongly encouraged to follow them in order, so that the finer points can be demonstrated and explained in progression. (A minimal sketch in the spirit of Example 1.1 follows the list below.)

  • Example 1. Basics of Numerical Integration
    • Example 1.1 Parallel Integration with MPI_Send, MPI_Recv
    • Example 1.2 Parallel Integration with MPI_Send, MPI_Recv (modified)
    • Example 1.3 Parallel Integration with MPI_Isend, MPI_Recv
    • Example 1.4 Parallel Integration with MPI_Gather
    • Example 1.5 Parallel Integration with MPI_Bcast, MPI_Reduce
  • You can download the above examples, along with the appropriate makefiles and batch scripts for the four parallel systems maintained by SCV.
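
As a flavor of what these examples contain, here is a minimal sketch in the spirit of Example 1.1 (illustrative only; the integrand and decomposition need not match the actual example): each process applies the midpoint rule to f(x) = 4/(1+x^2) on its own slice of [0,1], whose exact integral is pi, and the workers return their partial sums to process 0 with MPI_Send / MPI_Recv:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank, size, i, p;
        int n = 1000;                  /* subintervals per process (illustrative) */
        double a = 0.0, b = 1.0;       /* integrate 4/(1+x^2) on [a,b]; exact = pi */
        double h, x, mysum = 0.0, total;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each process integrates an equal slice of [a,b] */
        h = (b - a) / (n * size);
        for (i = 0; i < n; i++) {
            x = a + (rank * n + i + 0.5) * h;    /* midpoint of global subinterval */
            mysum += 4.0 / (1.0 + x * x) * h;
        }

        if (rank != 0) {
            /* workers send their partial sum to process 0 */
            MPI_Send(&mysum, 1, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD);
        } else {
            /* process 0 collects and accumulates all partial sums */
            total = mysum;
            for (p = 1; p < size; p++) {
                MPI_Recv(&mysum, 1, MPI_DOUBLE, p, 99, MPI_COMM_WORLD, &status);
                total += mysum;
            }
            printf("Integral approximation = %.12f\n", total);
        }

        MPI_Finalize();
        return 0;
    }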

Compilation and Execution

The Research Computing Services group (RCS) at Boston University maintains the Shared Computing Cluster (SCC), a large Linux cluster located in Holyoke, MA. Links to instructions on compiling and running jobs on the SCC are provided below.
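
With most MPI implementations, compiler wrapper scripts handle the compile and link steps, and a launcher starts the parallel job. The commands below are a generic sketch (the program name is hypothetical); the exact wrappers, module names, and batch-submission procedure are system-specific, so consult the SCC instructions:

    mpicc  -o myprog myprog.c      # compile and link a C MPI program
    mpif90 -o myprog myprog.f90    # compile and link a Fortran MPI program
    mpirun -np 4 ./myprog          # launch the program with 4 processes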

More MPI Routines

In addition to the basic MPI routines demonstrated above, there are many other routines for various applications. Some of the more frequently used routines, grouped according to their functionalities, are discussed below:
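
One such group, mentioned in the objectives, is the virtual topology routines. As an illustrative sketch (assuming a 2-D periodic process grid): MPI_Dims_create factors the processes into a grid, MPI_Cart_create builds a communicator with that topology, and MPI_Cart_coords / MPI_Cart_shift recover a process's grid position and neighbors:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int size, rank, coords[2], left, right;
        int dims[2]    = {0, 0};    /* 0 lets MPI choose the grid shape */
        int periods[2] = {1, 1};    /* wrap around in both dimensions */
        MPI_Comm grid;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Dims_create(size, 2, dims);          /* e.g., 6 processes -> 3 x 2 grid */
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);

        MPI_Comm_rank(grid, &rank);
        MPI_Cart_coords(grid, rank, 2, coords);  /* my (row, col) in the grid */
        MPI_Cart_shift(grid, 1, 1, &left, &right);   /* neighbors along dimension 1 */

        printf("rank %d at (%d,%d): left=%d, right=%d\n",
               rank, coords[0], coords[1], left, right);

        MPI_Finalize();
        return 0;
    }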

References

There are a number of MPI references available.

Books :

  1. Parallel Programming with MPI by P. S. Pacheco, Morgan Kaufmann, 1997
  2. Using MPI by W. Gropp, E. Lusk and A. Skjellum, The MIT Press, 1994
Online Documents:

  1. MPI: The Complete Reference by M. Snir et al., The MIT Press, 1996
  2. MPI: A Message-Passing Interface Standard Version 4.0, MPI Forum, June 2021

Here is the complete list of MPI routines and constants at the Argonne National Laboratory.


Your suggestions and comments are welcome; please send them to help@scc.bu.edu.