[This is preliminary documentation and is subject to change.]
Attributes are key/value pairs that can be attached to communicators, and are generally used to maintain communicator-specific information across different libraries or different languages. Each instance of the Attribute class is the "key" for a different kind of data, and can be used to index the AttributeSet associated with a communicator to query or modify the corresponding value. Each attribute is created with the type of data its value will store (e.g., an integer, a string, an object) and a duplication policy, which describes how a value of that type is propagated when communicators are cloned.
Attributes can be used for interoperability with native code. Any attribute whose type is a value type (e.g., primitive type or structure) that has either deep-copy or no-copy semantics will be stored on the low-level MPI communicator. These attributes can then be accessed by native code directly via the low-level MPI interface.
Contains the attributes attached to a communicator. Each communicator can contain key/value pairs with extra information about the communicator that can be queried from other languages and compilers. The keys in the attribute set are instances of the Attribute class, each of which will be associated with a specific type of value. The values associated with any attribute can be added, modified, queried, or removed for a particular communicator.
When a communicator is cloned, the attributes are copied to the new communicator. When creating an Attribute, decide whether the attribute should not be copied at all (None), copied only as a reference (Shallow), or fully cloned (Deep).
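A minimal sketch of how an attribute key is created, attached to a communicator, and propagated on cloning. The names `Attribute.Create<T>`, `AttributeDuplication`, and the `Attributes` indexer follow the MPI.NET API described above, but treat the exact signatures as assumptions.

```csharp
using MPI;

class AttributeExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Create a key for integer-valued attributes whose values are
            // deep-copied when the communicator is cloned.
            MPI.Attribute key = MPI.Attribute.Create<int>(AttributeDuplication.Deep);

            // Attach a value to this communicator under that key.
            world.Attributes[key] = 42;

            // A clone receives its own copy of the value (Deep duplication).
            Intracommunicator copy = (Intracommunicator)world.Clone();
            object value = copy.Attributes[key];
        }
    }
}
```

Because the attribute's type here is a value type with deep-copy semantics, it would also be stored on the low-level MPI communicator, where native code could read it.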
A CartesianCommunicator is a form of Intracommunicator that possesses extra topological information, and where all the processes in the communicator are arranged in a grid of arbitrary dimensions. Each node in a CartesianCommunicator has not only a rank but also coordinates indicating its place in the n-dimensional grid. Grids may be specified as periodic (or not) in any dimension, allowing cylinder and torus configurations as well.
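As a sketch, the following builds a 2-D torus over the world communicator. The constructor signature and the `ComputeDimensions` helper are assumed from the MPI.NET API; run under mpiexec.

```csharp
using MPI;

class CartesianExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            // Arrange the processes of the world communicator in a 2-D grid
            // that is periodic in both dimensions (a torus).
            int[] dims = { 0, 0 };   // zero means "let MPI choose the extent"
            CartesianCommunicator.ComputeDimensions(Communicator.world.Size, 2, dims);
            bool[] periodic = { true, true };

            CartesianCommunicator grid = new CartesianCommunicator(
                Communicator.world, 2, dims, periodic, false /* reorder */);

            // Each process can query its coordinates in the grid.
            int[] coords = grid.Coordinates;
        }
    }
}
```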
The communicator class abstracts a set of communicating processes in MPI. Communicators are the most fundamental types in MPI, because they are the basis of all inter-process communication. Each communicator provides a separate communication space for a set of processes, so that the messages transmitted with one communicator will not collide with messages transmitted with another communicator. As such, different libraries and different tasks can all use MPI without colliding, so long as they are using different communicators. There are two important kinds of communicators: intracommunicators and intercommunicators.
Intracommunicators are the most commonly used form of communicator. Each intracommunicator contains a set of processes, each of which is identified by its "rank" within the communicator. The ranks are numbered 0 through Size-1. Any process in the communicator can Send&lt;T&gt;(T, Int32, Int32) a message to another process within the communicator or Receive&lt;T&gt;(Int32, Int32) a message from any other process in the communicator. Intracommunicators also support a variety of collective operations that involve all of the processes in the communicator. Most MPI communication occurs within intracommunicators, with very few MPI programs requiring intercommunicators.
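A minimal point-to-point sketch using the Send/Receive methods described above (run under mpiexec with at least two processes; the exact MPI.NET signatures are assumed):

```csharp
using MPI;

class PingExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;
            if (world.Rank == 0)
            {
                // Send a string to rank 1 with message tag 0.
                world.Send("Hello from rank 0", 1, 0);
            }
            else if (world.Rank == 1)
            {
                // Receive a string from rank 0 with tag 0.
                string message = world.Receive<string>(0, 0);
            }
        }
    }
}
```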
Intercommunicators differ from intracommunicators in that intercommunicators contain two disjoint groups of processes, call them A and B. Any process in group A can send a message to or receive a message from any process in group B, and vice-versa. However, there is no way to use an intercommunicator to send messages among the processes within a group. Intercommunicators are often useful in large MPI applications that tie together many smaller modules. Typically, each module will have its own intracommunicators and the modules will interact with each other via intercommunicators.
Information about a specific message that has already been transferred via MPI.
Provides a mapping from .NET types to their corresponding MPI datatypes. This class should only be used by experts in both MPI's low-level (C) interfaces and the interaction between managed and unmanaged code in .NET.
Provides MPI initialization, finalization, and environmental queries.
The Group class provides the ability to manipulate sets of MPI processes.
Intercommunicators are Communicators that contain two disjoint groups of processes, call them A and B. Any process in group A can send a message to or receive a message from any process in group B, and vice-versa. However, there is no way to use an intercommunicator to send messages among the processes within a group. Intercommunicators are often useful in large MPI applications that tie together many smaller modules. Typically, each module will have its own intracommunicators and the modules will interact with each other via intercommunicators.
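The setup described above can be sketched as follows: split the world into two disjoint groups, then bridge them with an intercommunicator. The Intercommunicator constructor arguments (local communicator, local leader, bridge communicator, remote leader, tag) are assumed from the MPI.NET API.

```csharp
using MPI;

class IntercommExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Split the world into two disjoint groups:
            // A (even ranks) and B (odd ranks).
            int color = world.Rank % 2;
            Intracommunicator group =
                (Intracommunicator)world.Split(color, world.Rank);

            // Connect the two groups. Rank 0 of each group acts as its
            // leader, and world is the "bridge" used to make contact.
            int remoteLeader = (color == 0) ? 1 : 0;  // other leader's world rank
            Intercommunicator inter = new Intercommunicator(
                group, 0, world, remoteLeader, 0 /* tag */);

            // Processes in group A can now exchange messages with group B
            // through inter, but not with members of their own group.
        }
    }
}
```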
Intracommunicators are the most commonly used form of communicator in MPI. Each intracommunicator contains a set of processes, each of which is identified by its "rank" within the communicator. The ranks are numbered 0 through Size-1. Any process in the communicator can send a message to another process within the communicator or receive a message from any other process in the communicator. Intracommunicators also support a variety of collective operations that involve all of the processes in the communicator. Most MPI communication occurs within intracommunicators, with very few MPI programs requiring intercommunicators.
An exception thrown when an MPI message has been truncated on receive.
The Operation class provides reduction operations for use with the reduction collectives in the Communicator class, such as Allreduce&lt;T&gt;(T, ReductionOperation&lt;T&gt;). For example, the Add property is a delegate that adds two values of type T, while Min returns the minimum of the two values. The reduction operations provided by this class should be preferred to hand-written reduction operations (particularly for built-in types) because they enable additional optimizations in the MPI library. The Operation class also has a second role for users that require access to the low-level MPI interface. Creating an instance of the Operation class will find or create an appropriate MPI_Op for that reduction operation. This MPI_Op, accessible through the Op property, can be used directly with low-level MPI reduction operations.
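For example, summing a value across all processes with the built-in Add operation versus a hand-written delegate (a sketch; run under mpiexec):

```csharp
using MPI;

class ReduceExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Sum every process's rank across the communicator using the
            // built-in addition operation, which the MPI library can optimize.
            int sum = world.Allreduce(world.Rank, Operation<int>.Add);

            // A hand-written delegate also works, but may forgo those
            // MPI-level optimizations.
            int sum2 = world.Allreduce(world.Rank, (a, b) => a + b);
        }
    }
}
```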
A non-blocking receive request. This class allows one to test a receive request for completion, wait for completion of a request, cancel a request, or extract the value received by this communication request.
A non-blocking communication request. Each request object refers to a single communication operation, such as a non-blocking send (see ImmediateSend&lt;T&gt;(T, Int32, Int32)) or receive. Non-blocking operations may progress in the background, and can complete without any user intervention. However, it is crucial that outstanding communication requests be completed with a successful call to Wait() or Test() before the request object is lost.
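A sketch of the non-blocking pattern described above; `ReceiveRequest.GetValue()` for extracting the received value is assumed from the MPI.NET API (run under mpiexec with at least two processes):

```csharp
using MPI;

class ImmediateExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;
            if (world.Rank == 0)
            {
                // Start a non-blocking send to rank 1, then complete it.
                Request send = world.ImmediateSend("ping", 1, 0);
                send.Wait();
            }
            else if (world.Rank == 1)
            {
                // Start a non-blocking receive; Wait() blocks until the
                // message arrives, after which the value can be extracted.
                ReceiveRequest recv = world.ImmediateReceive<string>(0, 0);
                recv.Wait();
                string message = (string)recv.GetValue();
            }
        }
    }
}
```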
A request list contains a list of outstanding MPI requests. These requests are typically non-blocking send or receive operations (e.g., ImmediateSend&lt;T&gt;(T, Int32, Int32), ImmediateReceive&lt;T&gt;(Int32, Int32)). The request list provides the ability to operate on the set of MPI requests as a whole, for example by waiting until all requests complete before returning or testing whether any of the requests have completed.
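For instance, several outstanding requests can be collected and completed together. The `Add` and `WaitAll` members and `Communicator.anySource` are assumed from the MPI.NET API; run under mpiexec.

```csharp
using MPI;

class RequestListExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;
            int partner = (world.Rank + 1) % world.Size;

            // Collect several outstanding requests into one list.
            RequestList requests = new RequestList();
            requests.Add(world.ImmediateSend(world.Rank, partner, 0));
            requests.Add(world.ImmediateReceive<int>(Communicator.anySource, 0));

            // Block until every request in the list has completed.
            requests.WaitAll();
        }
    }
}
```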
Contains information about a specific message transmitted via MPI.
Direct, low-level interface to the system MPI library. This low-level interface provides direct access to the unmanaged MPI library provided by the system. It is by nature unsafe, and should only be used by programmers experienced in the use of MPI from lower-level languages (e.g., C, Fortran) who also understand the interaction between managed and unmanaged code, especially the issues pertaining to memory pinning and unpinning.
Placeholder type that is used to indicate that data being sent by one of the low-level MPI routines is packed by MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, ref Int32, Int32) and will be unpacked by MPI_Unpack(IntPtr, Int32, ref Int32, IntPtr, Int32, Int32, Int32).
Low-level representation of the status of an MPI communication operation. Unless you are interacting directly with the low-level MPI interface, use Status.
A reduction operation that combines two values to produce a third value. Used by various collective operations such as Allreduce&lt;T&gt;(T, ReductionOperation&lt;T&gt;).
Delegate describing a low-level MPI function used to copy attribute values from a communicator when the communicator is being duplicated with MPI_Comm_dup(Int32, ref Int32).
Delegate describing a low-level MPI function that takes care of de-allocating an attribute when it is deleted from a communicator (or the communicator itself is freed). Often used when the attribute's value is a pointer to some per-communicator data, and the pointer needs to be freed.
Enumeration describing how a given attribute should be copied (or not) when the communicator is cloned (duplicated).
The result of a comparison between two MPI objects.
Enumeration describing the level of threading support provided by the MPI implementation. The MPI environment should be initialized with the minimum threading support required for your application, because additional threading support can have a negative impact on performance. The four options provide monotonically increasing levels of freedom to the MPI program. Thus, a program implemented based on the Threading.Single semantics will work perfectly well (although perhaps less efficiently) with Threading.Multiple.
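For example, a program whose MPI calls are all made from one thread would request only Threading.Single at initialization. The Environment constructor overload and the static `Threading` property are assumed from the MPI.NET API.

```csharp
using MPI;

class ThreadingExample
{
    static void Main(string[] args)
    {
        // Request only the threading level the program needs; here a
        // single thread performs all MPI calls.
        using (MPI.Environment env =
                   new MPI.Environment(ref args, Threading.Single))
        {
            // The level actually granted may exceed the one requested.
            Threading granted = MPI.Environment.Threading;
        }
    }
}
```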