
MPI.NET Tutorial: Hello, World!


Create a New Project

To create an MPI "Hello, World!", we'll first create a new C# console application in Visual Studio. We'll use the project name MPIHello for this new project.

[Figure: Creating a new C# console application]

Reference the MPI.NET Assembly

Once you've created your project, you need to add a reference to the MPI.NET assembly in Visual Studio. This will allow your program to use MPI.NET's facilities, and will also give you on-line help for MPI.NET's classes and functions. In the Solution Explorer, right click on "References" and select "Add Reference...":

[Figure: Adding a reference to a Visual Studio project]

Next, scroll down to select the "Message Passing Interface" item from the list of components under the .NET tab, then click "OK" to add a reference to the MPI.NET assembly.

[Figure: Adding a reference to the Message Passing Interface]

Now, we're ready to write some MPI code!

Writing Hello, World!

The first step in any MPI program is to initialize the MPI environment. Every MPI process must perform this initialization before attempting to use MPI in any way. To initialize the MPI environment, we first bring in the MPI namespace with a using statement. Then, we create a new instance of MPI.Environment within our Main routine, passing the new object a reference to our command-line arguments:

using System;
using MPI;

class MPIHello
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            // MPI program goes here!
        }
    }
}

The entirety of an MPI program should be contained within the using statement, which guarantees that the MPI environment will be properly finalized (via MPI.Environment's Dispose method) before the program exits. All valid MPI programs must both initialize and finalize the MPI environment. We pass in a reference to our command-line arguments, args, because MPI implementations are permitted to use special command-line arguments to pass state information into the MPI initialization routines (although few MPI implementations actually do this). In theory, MPI could remove some MPI-specific arguments from args, but in practice args will be untouched.

Now that we have the MPI environment initialized, we can write a simple program that prints out a string from each process. Inside the using statement, add the line:

Console.WriteLine("Hello, World! from rank " + Communicator.world.Rank +
                  " (running on " + MPI.Environment.ProcessorName + ")");

Each MPI process will execute this code independently (and concurrently), and each will likely produce slightly different results. For example, MPI.Environment.ProcessorName returns the name of the computer on which a process is running, which could differ from one MPI process to the next (if we're running our program on a cluster). Similarly, we're printing out the rank of each process via Communicator.world.Rank. We'll talk about communicators a bit more later.

Running Hello, World!

To execute our "Hello, World!" program, navigate to the binary directory for your project (e.g., MPIHello\bin\Debug) and run some number of copies of the program with mpiexec:

C:\MPIHello\bin\Debug>mpiexec -n 8 MPIHello.exe
Hello, World! from rank 0 (running on jeltz)
Hello, World! from rank 6 (running on jeltz)
Hello, World! from rank 3 (running on jeltz)
Hello, World! from rank 7 (running on jeltz)
Hello, World! from rank 4 (running on jeltz)
Hello, World! from rank 1 (running on jeltz)
Hello, World! from rank 2 (running on jeltz)
Hello, World! from rank 5 (running on jeltz)

Notice that we have 8 different lines of output, one for each of the 8 MPI processes we started as part of our MPI program. Each outputs its rank (from 0 to 7) and the name of the processor or machine it is running on. The output you receive from running this program will be slightly different from the output shown here, and will probably differ from one invocation to the next. Since the processes are running concurrently, we don't know in what order the processes will finish the call to WriteLine and write that output to the screen. To actually enforce some ordering, the processes would have to communicate.
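One common way to approximate ordered output is to have the processes take turns, synchronizing between turns with the communicator's barrier operation. The sketch below assumes MPI.NET's Intracommunicator type and its Barrier method; note that even this does not strictly guarantee on-screen ordering, since each process's console output may still be buffered on its way back to mpiexec.

```csharp
using System;
using MPI;

class OrderedHello
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Take turns: rank 'turn' prints while every other process
            // waits at the barrier, so lines tend to appear in rank order.
            for (int turn = 0; turn < world.Size; ++turn)
            {
                if (world.Rank == turn)
                    Console.WriteLine("Hello, World! from rank " + world.Rank);
                world.Barrier();
            }
        }
    }
}
```

Run it the same way as before, e.g. mpiexec -n 8 OrderedHello.exe.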

MPI Communicators

In the "Hello, World!" example, we referenced the Communicator class in MPI.NET to determine the rank of each process. MPI communicators are the fundamental abstraction that permits communication among different MPI processes, and every non-trivial MPI program will make use of some communicators.

Each communicator represents a self-contained communication space for some set of MPI processes. Any of the processes in that communicator can exchange messages with any other process in that communicator, without fear of those messages colliding with any messages being transmitted on a different communicator. MPI programs often use several different communicators for different tasks: for example, the main MPI program may use one communicator for control messages that direct the program based on user input, while certain subgroups of the processes in that program use their own communicators to collaborate on subtasks. Since each of the communicators is a completely distinct communication space, there is no need to worry about the "control" messages from the user clashing with the messages that the subgroups exchange while working on a task. There are two major properties of communicators used by essentially every MPI program: the rank of the process within the communicator, which identifies that process, and the size of the communicator, which provides the number of processes in the communicator.
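In MPI.NET, these two properties are exposed as the Rank and Size properties of the Communicator class. A minimal sketch, using the world communicator introduced below:

```csharp
using System;
using MPI;

class RankAndSize
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            // Rank identifies this process within the communicator;
            // Size is the total number of processes in it.
            Console.WriteLine("Process " + Communicator.world.Rank
                              + " of " + Communicator.world.Size);
        }
    }
}
```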

Every MPI program begins with only two communicators defined, world and self. The world communicator (written as Communicator.world) is a communicator that contains all of the MPI processes that the MPI program started with. So, if the user started 8 MPI processes via mpiexec, as we did above, all 8 of those processes can communicate via the world communicator. In our "Hello, World!" program, we printed out the rank of each process within the world communicator. The self communicator is quite a bit more limited: each process has its own self communicator, which contains only that process and nothing more. We will not refer to the self communicator again in this tutorial, because it is rarely used in MPI programs. From the initial two communicators, world and self, the user (or other libraries built on top of MPI) can create their own communicators, either by cloning a communicator (which produces a communicator with the same processes and same ranks, but a separate communication space) or by selecting subgroups of those processes.
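As a sketch of both approaches, the example below clones the world communicator and also splits it into even- and odd-ranked subgroups. The Clone and Split(color, key) members are assumed from MPI.NET's Communicator class (check the API reference for exact signatures); processes that pass the same color to Split end up together in a new subgroup communicator, and key determines their rank order within it.

```csharp
using System;
using MPI;

class CommunicatorExamples
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Clone: same processes, same ranks, but a separate
            // communication space, so messages cannot collide with world's.
            Communicator control = (Communicator)world.Clone();

            // Split: even ranks form one subgroup, odd ranks another.
            Communicator subgroup = world.Split(world.Rank % 2, world.Rank);

            Console.WriteLine("World rank " + world.Rank
                              + " is rank " + subgroup.Rank
                              + " in a subgroup of size " + subgroup.Size);
        }
    }
}
```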

Now that we've written "Hello, World!" and have introduced MPI communicators, we'll move on to the most important part of MPI: passing messages between the processes in an MPI program.

Previous: Installation Next: Point-to-Point Communication