
MPI.NET Tutorial: Introduction


This tutorial will help you install and use MPI.NET, a .NET library that enables the creation of high-performance parallel applications that can be deployed on multi-threaded workstations and Windows clusters. MPI.NET provides access to the Message Passing Interface (MPI) in C# and all of the other .NET languages. MPI is a standard for message-passing programs that is widely implemented and used for high-performance parallel programs that execute on clusters and supercomputers.

By the end of this tutorial, you should be able to:

  • Install MPI.NET and its prerequisites.
  • Write parallel MPI applications for deployment on Windows workstations and clusters using point-to-point and collective communication.
  • Execute parallel MPI applications locally and on a cluster.

Other Tutorials

This tutorial is written for online reading and uses C# for all of its examples. Please see the main documentation page for tutorials formatted for printing/offline reading and tutorials using other languages (e.g., Python).

MPI Programming Model

The MPI programming model is, as its name implies, based on message passing. In a message-passing system, concurrently-executing processes communicate by sending messages to one another over a network. Unlike multi-threading, where different threads share the same program state, each MPI process has its own local program state that cannot be observed or modified by any other process except in response to a message. Therefore, the MPI processes themselves can be as distributed as the network permits, with different processes running on different machines or even different architectures.
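A minimal message exchange can be sketched with MPI.NET's point-to-point operations, which are covered in detail later in this tutorial. In this sketch, process 0 sends a string to process 1, which blocks until the message arrives:

```csharp
using MPI;

class MessagePassingSketch
{
    static void Main(string[] args)
    {
        // Initialize the MPI environment; it is finalized
        // automatically when the using block exits.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                // Process 0 sends a message to process 1 (message tag 0).
                comm.Send("Hello from rank 0", 1, 0);
            }
            else if (comm.Rank == 1)
            {
                // Process 1 blocks until the message from process 0 arrives.
                string message = comm.Receive<string>(0, 0);
                System.Console.WriteLine(message);
            }
        }
    }
}
```

No state is shared: the string travels only because one process explicitly sends it and the other explicitly receives it.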

Most MPI programs are written with the Single Program, Multiple Data (SPMD) parallel model, where each of the processes is running the same program but working on a different part of the data. SPMD processes will typically perform a significant amount of computation on the data that is available locally (within that process's local memory), communicating with the other processes in the parallel program at the boundaries of the data. For example, consider a simple program that computes the sum of all of the elements in an array. The sequential program would loop through the array summing all of the values to produce a result. In a SPMD parallel program, the array would be broken up into several different pieces (one per process), and each process would sum the values in its local array (using the same code that the sequential program would have used). Then, the processes in the parallel program would communicate to combine their local sums into a global sum for the array.
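The array-sum example above can be sketched in MPI.NET using the collective Reduce operation, which is introduced later in this tutorial. For simplicity, each process fabricates its own piece of the array rather than receiving it from elsewhere:

```csharp
using MPI;

class ParallelSum
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // Each process works on its own piece of the data. Here we
            // fabricate a local chunk; a real program would distribute
            // an existing array across the processes.
            int chunkSize = 100;
            int[] localData = new int[chunkSize];
            for (int i = 0; i < chunkSize; ++i)
                localData[i] = comm.Rank * chunkSize + i;

            // Sum the local values with ordinary sequential code.
            int localSum = 0;
            foreach (int value in localData)
                localSum += value;

            // Combine the per-process sums into a global sum at rank 0.
            int globalSum = comm.Reduce(localSum, Operation<int>.Add, 0);
            if (comm.Rank == 0)
                System.Console.WriteLine("Total: " + globalSum);
        }
    }
}
```

Note how the local summation loop is exactly the code a sequential program would use; only the final combining step involves communication.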

MPI supports the SPMD model by allowing the user to easily launch the same program across many different machines (nodes) with a single command. Initially, each of the processes is identical, with one distinguishing characteristic: each process is assigned a rank, which uniquely identifies that process. The ranks of MPI processes are integer values from 0 to P-1, where P is the number of processes launched as part of the MPI program. MPI processes can query their rank, allowing different processes in the MPI program to behave differently, and can exchange messages with other processes in the same job via their ranks.
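Querying the rank and process count can be sketched as follows; launching this program with, e.g., `mpiexec -n 4`, starts four copies of the same executable, each printing a different rank:

```csharp
using MPI;

class RankQuery
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            // Rank is an integer in [0, Size); Size is the number
            // of processes P launched as part of this MPI program.
            System.Console.WriteLine("I am rank " + comm.Rank
                                     + " of " + comm.Size);
        }
    }
}
```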

Next: Installation