[This is preliminary documentation and is subject to change.]

The MPI namespace contains classes that provide access to the Message Passing Interface (MPI) for writing distributed, parallel programs to be run on Windows clusters.

Classes

Communicator
The communicator class abstracts a set of communicating processes in MPI. Communicators are the most fundamental types in MPI, because they are the basis of all inter-process communication. Each communicator provides a separate communication space for a set of processes, so that messages transmitted with one communicator will not collide with messages transmitted with another communicator. As such, different libraries and different tasks can all use MPI without colliding, so long as they use different communicators.

There are two important kinds of communicators: intracommunicators and intercommunicators.

Intracommunicators are the most commonly used form of communicator. Each intracommunicator contains a set of processes, each of which is identified by its "rank" within the communicator. The ranks are numbered 0 through Size-1. Any process in the communicator can Send<T>(T, Int32, Int32) a message to another process within the communicator or Receive<T>(Int32, Int32) a message from any other process in the communicator (a minimal point-to-point sketch appears after this class list). Intracommunicators also support a variety of collective operations that involve all of the processes in the communicator. Most MPI communication occurs within intracommunicators, with very few MPI programs requiring intercommunicators.

Intercommunicators differ from intracommunicators in that intercommunicators contain two disjoint groups of processes, call them A and B. Any process in group A can send a message to or receive a message from any process in group B, and vice versa. However, there is no way to use an intercommunicator to send messages among the processes within a group. Intercommunicators are often useful in large MPI applications that tie together many smaller modules. Typically, each module will have its own intracommunicators and the modules will interact with each other via intercommunicators.
CompletedStatus
Information about a specific message that has already been transferred via MPI.
DatatypeCache
Provides a mapping from .NET types to their corresponding MPI datatypes. This class should only be used by experts in both MPI's low-level (C) interfaces and the interaction between managed and unmanaged code in .NET.
Environment
Provides MPI initialization, finalization, and environmental queries.
Group
The Group class provides the ability to manipulate sets of MPI processes.
Intercommunicator
Intercommunicators are Communicators that contain two disjoint groups of processes, call them A and B. Any process in group A can send a message to or receive a message from any process in group B, and vice versa. However, there is no way to use an intercommunicator to send messages among the processes within a group. Intercommunicators are often useful in large MPI applications that tie together many smaller modules. Typically, each module will have its own intracommunicators and the modules will interact with each other via intercommunicators.
Intracommunicator
Intracommunicators are the most commonly used form of communicator in MPI. Each intracommunicator contains a set of processes, each of which is identified by its "rank" within the communicator. The ranks are numbered 0 through Size-1. Any process in the communicator can send a message to another process within the communicator or receive a message from any other process in the communicator. Intracommunicators also support a variety of collective operations that involve all of the processes in the communicator. Most MPI communication occurs within intracommunicators, with very few MPI programs requiring intercommunicators.
Operation<T>
The Operation class provides reduction operations for use with the reduction collectives in the Communicator class, such as Allreduce<T>(T, ReductionOperation<T>). For example, the Add property is a delegate that adds two values of type T, while Min returns the minimum of the two values. The reduction operations provided by this class should be preferred to hand-written reduction operations (particularly for built-in types) because they enable additional optimizations in the MPI library (a reduction sketch appears after this class list). The Operation class also has a second role for users that require access to the low-level MPI interface. Creating an instance of the Operation class will find or create an appropriate MPI_Op for that reduction operation. This MPI_Op, accessible through the Op property, can be used with low-level MPI reduction operations directly.
ReceiveRequest
A non-blocking receive request. This class allows one to test a receive request for completion, wait for completion of a request, cancel a request, or extract the value received by this communication request.
Request
A non-blocking communication request. Each request object refers to a single communication operation, such as a non-blocking send (see ImmediateSend<T>(T, Int32, Int32)) or receive. Non-blocking operations may progress in the background, and can complete without any user intervention. However, it is crucial that outstanding communication requests be completed with a successful call to Wait() or Test() before the request object is lost.
RequestList
A request list contains a list of outstanding MPI requests. These requests are typically non-blocking send or receive operations (e.g., ImmediateSend<T>(T, Int32, Int32), ImmediateReceive<T>(Int32, Int32)). The request list provides the ability to operate on the set of MPI requests as a whole, for example by waiting until all requests complete before returning or testing whether any of the requests have completed (a non-blocking sketch appears after this class list).
Status
Contains information about a specific message transmitted via MPI.
Unsafe
Direct, low-level interface to the system MPI library. This low-level interface provides direct access to the unmanaged MPI library provided by the system. It is by nature unsafe, and should only be used by programmers experienced both in the use of MPI from lower-level languages (e.g., C, Fortran) and in the interaction between managed and unmanaged code, especially the issues that pertain to memory pinning and unpinning.
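
The classes above are typically used together. A minimal point-to-point sketch: MPI is initialized for the lifetime of a using block around an MPI.Environment instance, Communicator.world exposes the predefined world intracommunicator, and two ranks exchange a message with Send and Receive. The message text and the tag value of 0 are arbitrary choices for illustration.

    using System;
    using MPI;

    class PointToPoint
    {
        static void Main(string[] args)
        {
            // Initialize MPI; finalization happens when the using block ends.
            using (new MPI.Environment(ref args))
            {
                Intracommunicator comm = Communicator.world;

                if (comm.Rank == 0)
                {
                    // Rank 0 sends a string to rank 1 with tag 0.
                    comm.Send("Hello from rank 0", 1, 0);
                }
                else if (comm.Rank == 1)
                {
                    // Rank 1 receives the string sent by rank 0 with tag 0.
                    string message = comm.Receive<string>(0, 0);
                    Console.WriteLine("Rank 1 received: " + message);
                }
            }
        }
    }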
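
A similar sketch for the reduction collectives: each process contributes one value to Allreduce, and the predefined Operation<int>.Add delegate combines them; summing the ranks is only an illustration.

    using System;
    using MPI;

    class SumOfRanks
    {
        static void Main(string[] args)
        {
            using (new MPI.Environment(ref args))
            {
                Intracommunicator comm = Communicator.world;

                // Combine one value per process; using Operation<int>.Add (rather than a
                // hand-written delegate) lets the library apply built-in optimizations.
                int total = comm.Allreduce(comm.Rank, Operation<int>.Add);

                if (comm.Rank == 0)
                    Console.WriteLine("Sum of all ranks: " + total);
            }
        }
    }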
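
And a sketch of the non-blocking classes working together, assuming a ring exchange in which every process sends its rank to its successor: ImmediateSend and ImmediateReceive return Request and ReceiveRequest objects, a RequestList completes them as a group, and the received value is read back from the ReceiveRequest (the GetValue call is an assumption about how that value is exposed).

    using System;
    using MPI;

    class RingExchange
    {
        static void Main(string[] args)
        {
            using (new MPI.Environment(ref args))
            {
                Intracommunicator comm = Communicator.world;
                int next = (comm.Rank + 1) % comm.Size;              // successor in the ring
                int prev = (comm.Rank + comm.Size - 1) % comm.Size;  // predecessor in the ring

                // Post the non-blocking operations; both calls return immediately.
                Request send = comm.ImmediateSend(comm.Rank, next, 0);
                ReceiveRequest recv = comm.ImmediateReceive<int>(prev, 0);

                // Track the outstanding requests as a group and wait for all of them.
                RequestList pending = new RequestList();
                pending.Add(send);
                pending.Add(recv);
                pending.WaitAll();

                // GetValue is assumed here to expose the value delivered by the receive.
                Console.WriteLine("Rank " + comm.Rank + " received " + recv.GetValue() + " from rank " + prev);
            }
        }
    }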

Structures

DatatypeCache.LowerBound
Placeholder type that is used to mark the lower bound of an MPI derived data type. This placeholder type should only be used by MPI experts that require the flexibility of the lowest-level MPI for constructing special MPI datatypes. This type is mapped to MPI_LB.
DatatypeCache.Packed
Placeholder type that is used to indicate that data being sent by one of the low-level MPI routines is packed by MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, Int32*, Int32) and will be unpacked by MPI_Unpack(IntPtr, Int32, Int32*, IntPtr, Int32, Int32, Int32).
DatatypeCache.UpperBound
Placeholder type that is used to mark the upper bound of an MPI derived data type. This placeholder type should only be used by MPI experts that require the flexibility of the lowest-level MPI for constructing special MPI datatypes. This type is mapped to MPI_UB.
Unsafe.MPI_Status
Low-level representation of the status of an MPI communication operation. Unless you are interacting directly with the low-level MPI interface, use Status.

Delegates

ReductionOperation<T>
A reduction operation that combines two values to produce a third value. Used by various collective operations such as Allreduce<T>(T, ReductionOperation<T>).
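
The delegate can be any method or lambda that combines two values associatively; the string-concatenation reduction below is purely illustrative of supplying a custom ReductionOperation rather than one of the predefined Operation<T> delegates.

    using System;
    using MPI;

    class CustomReduction
    {
        static void Main(string[] args)
        {
            using (new MPI.Environment(ref args))
            {
                Intracommunicator comm = Communicator.world;

                // Each process contributes one token; the user-defined lambda plays the
                // role of the ReductionOperation<string>, concatenating pairs of tokens.
                string token = "[" + comm.Rank + "]";
                string combined = comm.Allreduce<string>(token, (a, b) => a + b);

                if (comm.Rank == 0)
                    Console.WriteLine("Combined tokens: " + combined);
            }
        }
    }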

Enumerations

Comparison
The result of a comparison between two MPI objects.
Threading
Enumeration describing the level of threading support provided by the MPI implementation. The MPI environment should be initialized with the minimum threading support required for your application, because additional threading support can have a negative impact on performance. The four options provide monotonically increasing levels of freedom for the MPI program. Thus, a program written against the Threading.Single semantics will work perfectly well (although perhaps less efficiently) with Threading.Multiple.
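
As a sketch of choosing a threading level at initialization (this assumes the MPI.Environment constructor overload that accepts a Threading value), a program whose MPI calls all come from the main thread might request only Threading.Funneled:

    using System;
    using MPI;

    class ThreadedInit
    {
        static void Main(string[] args)
        {
            // Request Threading.Funneled: only the thread that initialized MPI will make
            // MPI calls, so stronger (and potentially slower) support is not requested.
            // (The two-argument constructor is assumed here.)
            using (new MPI.Environment(ref args, Threading.Funneled))
            {
                Intracommunicator comm = Communicator.world;
                if (comm.Rank == 0)
                    Console.WriteLine("Running with " + comm.Size + " processes.");
            }
        }
    }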