[This is preliminary documentation and is subject to change.]

The Unsafe type exposes the following members.

Constructors

  Name    Description
Unsafe

Methods

  Name    Description
ComparisonFromInt
Converts the integer result of an MPI_*_compare operation into a Comparison enum value.
Equals (Inherited from Object.)
Finalize (Inherited from Object.)
GetHashCode (Inherited from Object.)
GetType (Inherited from Object.)
MemberwiseClone (Inherited from Object.)
MPI_Abort
Aborts the current MPI program.
MPI_Address
Converts a pointer into an address for use with MPI. In many cases, this operation is simply a cast from the pointer's value to an integer.
MPI_Allgather
Gather the values provided by each process into an array containing the contributions of all of the processes. This operation is equivalent to an MPI_Gather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32) to an arbitrary root followed by an MPI_Bcast(IntPtr, Int32, Int32, Int32, Int32) from that root. See Allgather<T>(T).
MPI_Allgatherv
Gather the values provided by each process into an array containing the contributions of all of the processes. This operation differs from MPI_Allgather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32) in that it permits different processes to provide a different number of elements to be gathered. See Allgather<T>(T).
MPI_Alloc_mem
Attempts to allocate (unmanaged) memory from MPI. This memory must be manually freed with a call to MPI_Free_mem(IntPtr).
MPI_Allreduce
Perform a parallel reduction operation that summarizes the results from the input provided by all of the processes in the communicator. Semantically, this is equivalent to an MPI_Reduce(IntPtr, IntPtr, Int32, Int32, Int32, Int32, Int32) to an arbitrary root followed by an MPI_Bcast(IntPtr, Int32, Int32, Int32, Int32) from that process. See Allreduce<T>(T, ReductionOperation<T>).
MPI_Alltoall
Transmits data from every process in a communicator to every other process in the communicator, where every process sends the same amount of data to each of the other processes. See Alltoall<T>(T[]).
MPI_Alltoallv
Transmits data from every process in a communicator to every other process in the communicator. Similar to MPI_Allgatherv(IntPtr, Int32, Int32, IntPtr, Int32[], Int32[], Int32, Int32), except that each process can send different data to every other process. If all of your processes send the same amount of data to each other, use the simpler MPI_Alltoall(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32); if you need the data sent to different processes to have different datatypes, use MPI_Alltoallw(IntPtr, Int32[], Int32[], Int32[], IntPtr, Int32[], Int32[], Int32[], Int32). See Alltoall<T>(T[]).
MPI_Alltoallw
Transmits data from every process in a communicator to every other process in the communicator. Similar to MPI_Allgatherv(IntPtr, Int32, Int32, IntPtr, Int32[], Int32[], Int32, Int32), except that each process can send different data to every other process. If all of your processes send the same amount of data to each other, use the simpler MPI_Alltoall(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32); if the volume of data sent to each process can be different but all of the data has the same type, use MPI_Alltoallv(IntPtr, Int32[], Int32[], Int32, IntPtr, Int32[], Int32[], Int32, Int32). See Alltoall<T>(T[]).
MPI_Attr_delete
Deletes an attribute stored on the communicator. See AttributeSet.
MPI_Attr_get
Retrieves the value of an attribute on a communicator.
MPI_Attr_put
Sets the value of an attribute on a communicator. See AttributeSet.
MPI_Barrier
A synchronization barrier where no processor leaves the barrier until all processors have entered the barrier. See Barrier().
MPI_Bcast
Broadcast a value from the root process to every process within the communicator. See Broadcast<T>(ref T, Int32).
MPI_Cancel
Cancel an outstanding MPI communication request. See Cancel().
MPI_Cart_coords
Determines the coordinates of a process given its rank in the Cartesian communicator.
MPI_Cart_create
Creates a new Cartesian communicator from another communicator.
MPI_Cart_get
Retrieves the primary topological information on a Cartesian communicator: the number of dimensions, the size in each dimension, the periodicity in each dimension. Also gives the coordinates of the calling process.
MPI_Cart_map
Returns a recommended configuration for a new Cartesian grid.
MPI_Cart_rank
Determines the rank of a process in the Cartesian communicator given its coordinates.
MPI_Cart_shift
Calculates the necessary source and destination ranks for shifting data over the Cartesian communicator.
MPI_Cart_sub
Create a lower-dimensional grid from an existing Cartesian communicator.
MPI_Cartdim_get
Gets the number of dimensions in the Cartesian communicator.
MPI_Comm_compare
Compare two communicators. See Compare(Communicator).
MPI_Comm_create
Creates a new communicator from a subgroup of the processes in an existing communicator. See Create(Group).
MPI_Comm_dup
Duplicates a communicator, creating a new communicator with the same processes and ranks as the original. See Clone().
MPI_Comm_free
Frees a communicator. This routine will be invoked automatically by Dispose() or the finalizer for Communicator.
MPI_Comm_group
Retrieve the group associated with a communicator. See Group.
MPI_Comm_rank
Determines the rank of the calling process in the communicator. See Rank.
MPI_Comm_remote_group
Retrieves the remote group from an intercommunicator. See RemoteGroup.
MPI_Comm_remote_size
Determines the number of processes in the remote group of an intercommunicator. See RemoteSize.
MPI_Comm_size
Determines the number of processes in the communicator. See Size.
MPI_Comm_split
Splits a communicator into several new communicators, based on the colors provided. See Split(Int32, Int32).
MPI_Comm_test_inter
Determine whether a communicator is an intercommunicator. In MPI.NET, intercommunicators will have type Intercommunicator.
MPI_Dims_create
Suggest a shape for a new Cartesian communicator, given the number of dimensions.
MPI_Errhandler_create
Creates a new MPI error handler from a user function. Attaching this error handler to a communicator will invoke the user error handler when an error occurs. This feature is not supported in MPI.NET.
MPI_Errhandler_free
Free a user-defined error handler that was created with MPI_Errhandler_create(IntPtr, ref Int32).
MPI_Errhandler_get
Retrieve the error handler for a given communicator.
MPI_Errhandler_set
Set the error handler for a given communicator.
MPI_Error_class
Maps an MPI error code into an error class. Error classes describe (in general) what kind of error occurred, and can be used to provide better information to the user. The MPI_ERR_* constants give the various error classes present in MPI.
MPI_Error_string
Retrieves an error string corresponding to the given MPI error code.
MPI_Exscan
Performs a partial exclusive reduction on the data, returning the result from combining the data provided by the first P-1 processes to the process with rank P. See ExclusiveScan<T>(T, ReductionOperation<T>).
MPI_Finalize
Finalizes (shuts down) MPI. This routine must be called before exiting the program. It will be invoked by Dispose().
MPI_Finalized
Determine whether MPI has already been finalized. See Environment.Finalized.
MPI_Free_mem
Frees memory allocated with MPI_Alloc_mem(IntPtr, Int32, ref IntPtr).
MPI_Gather
Gather the values provided by each process into an array containing the contributions of all of the processes. This routine differs from MPI_Allgather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32) in that the results are gathered to only the "root" process, which is identified by its rank in the communicator. See Gather<T>(T, Int32).
MPI_Gatherv
Gather the values provided by each process into an array containing the contributions of all of the processes, gathered to only the "root" process. This routine differs from MPI_Gather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32) in that it permits different processes to provide a different number of elements to be gathered. See Gather<T>(T, Int32).
MPI_Get_count
Determine the number of elements transmitted by a communication operation. See Count(Type).
MPI_Get_processor_name
Retrieve the name of the processor or compute node that is currently executing. See ProcessorName.
MPI_Graph_create
Create a topological Graph communicator.
MPI_Graph_get
Retrieve the index and edges arrays used to create the graph communicator.
MPI_Graph_map
Returns a recommended configuration for a new Graph communicator.
MPI_Graph_neighbors
Retrieve a list of the neighbors of a node.
MPI_Graph_neighbors_count
Retrieve the number of neighbors of a node.
MPI_Graphdims_get
Retrieve the dimensions of a Graph communicator.
MPI_Group_compare
Compare two groups. See Compare(Group).
MPI_Group_difference
Create a group from the difference of two groups. See Subtraction(Group, Group).
MPI_Group_excl
Create a subgroup containing all processes in an existing group except those specified. See Exclude(Int32[]).
MPI_Group_free
Frees a group. This routine will be invoked automatically by Dispose() or the finalizer for Group.
MPI_Group_incl
Create a subgroup containing the processes with specific ranks in an existing group. See IncludeOnly(Int32[]).
MPI_Group_intersection
Create a group from the intersection of two groups. See BitwiseAnd(Group, Group).
MPI_Group_range_excl
Create a subgroup of processes containing all of the processes in the source group except those described by one of the provided (first, last, stride) rank triples. Note: this precise functionality is not exposed directly in the normal MPI layer; however, the same semantics can be attained with Exclude(Int32[]).
MPI_Group_range_incl
Create a subgroup of processes in a group, based on a set of (first, last, stride) rank triples. Note: this precise functionality is not exposed directly in the normal MPI layer; however, the same semantics can be attained with IncludeOnly(Int32[]).
MPI_Group_rank
Determine the rank of the calling process in a group. See Rank.
MPI_Group_size
Determine the number of processes in a group. See Size.
MPI_Group_translate_ranks
Translate the ranks of processes in one group into those processes' corresponding ranks in another group. See TranslateRanks(Int32[], Group).
MPI_Group_union
Create a group from the union of two groups. See BitwiseOr(Group, Group).
MPI_Init
Initializes MPI. This routine must be called before any other MPI routine. It will be invoked by the Environment constructor.
MPI_Init_thread
Initializes the MPI library with thread support. This operation subsumes MPI_Init(ref Int32, ref Byte**). See Environment(ref String[], Threading).
MPI_Initialized
Determine whether MPI has already been initialized. See Environment.Initialized.
MPI_Intercomm_create
Create a new intercommunicator from two disjoint intracommunicators. See Intercommunicator(Intracommunicator, Int32, Intracommunicator, Int32, Int32).
MPI_Intercomm_merge
Merge the two groups in an intercommunicator into a single intracommunicator. See Merge(Boolean).
MPI_Iprobe
Test whether a message is available. See ImmediateProbe(Int32, Int32).
MPI_Irecv
A non-blocking receive that posts the intent to receive a value. The actual receive will be completed when the corresponding request is completed.
MPI_Is_thread_main
Determine whether the calling thread is the main MPI thread (that called MPI_Init(ref Int32, ref Byte**) or MPI_Init_thread(ref Int32, ref Byte**, Int32, ref Int32)). See IsMainThread.
MPI_Isend
An immediate (non-blocking) point-to-point send. See ImmediateSend<T>(T, Int32, Int32).
MPI_Keyval_create
Creates a new MPI attribute that can be attached to communicators. See Create<T>(AttributeDuplication).
MPI_Keyval_free
Frees an attribute with the given key value. The user must ensure that this attribute has been deleted from all communicators before calling this routine.
MPI_Op_create
Creates an MPI operation that invokes a user-provided function. The MPI operation can be used with various reduction operations. MPI.NET provides support for user-defined operations via the Operation<T> class.
MPI_Op_free
Frees an MPI operation created via MPI_Op_create(IntPtr, Int32, ref Int32). MPI.NET will automatically manage any operations it creates via Operation<T> when the corresponding object is disposed of or finalized.
MPI_Pack
Packs (serializes) data into a byte buffer. This serialized representation can be transmitted via MPI with the datatype MPI_PACKED and unpacked with MPI_Unpack(IntPtr, Int32, ref Int32, IntPtr, Int32, Int32, Int32). Serialization in MPI.NET is automatic, so this routine is very rarely used.
MPI_Pack_size
Determine the maximum amount of space that packing incount values with the MPI datatype datatype will require. This routine is useful for allocating buffer space when packing data with MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, ref Int32, Int32).
MPI_Probe
Wait until a message is available. See Probe(Int32, Int32).
MPI_Query_thread
Determine the level of threading support provided by the MPI library.
MPI_Recv
Receive a message from another process within the communicator. See Receive<T>(Int32, Int32).
MPI_Reduce
Perform a parallel reduction operation that summarizes the results from the data contributed by all of the processes in a communicator. Unlike MPI_Allreduce(IntPtr, IntPtr, Int32, Int32, Int32, Int32), the results of this operation are returned only to the process whose rank is equivalent to root, i.e., the "root" process. See Reduce<T>(T, ReductionOperation<T>, Int32).
MPI_Reduce_scatter
Performs a parallel reduction operation and then scatters blocks of the combined result, so that each process in the communicator receives a portion of it.
MPI_Request_free
Free the resources associated with a request.
MPI_Scan
Performs a partial reduction on the data, returning the result from combining the data provided by the first P processes to the process with rank P. See Scan<T>(T, ReductionOperation<T>).
MPI_Scatter
Scatters data from one process (the "root" process) to all of the processes in a communicator, with different parts of the data going to different processes. See Scatter<T>(T[], Int32).
MPI_Scatterv
Scatters data from one process (the "root" process) to all of the processes in a communicator, with different parts of the data going to different processes. Unlike MPI_Scatter(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32), different processes may receive different amounts of data. See Scatter<T>(T[], Int32).
MPI_Send
Send a message to another process within the communicator. See Send<T>(T, Int32, Int32).
MPI_Test
Test whether the given request has completed. See Test().
MPI_Test_cancelled
Determine whether a particular communication operation was cancelled. See Cancelled.
MPI_Testall
Test whether all of the given MPI requests have been completed. See TestAll().
MPI_Testany
Test whether any of the MPI requests has completed. See TestAny().
MPI_Testsome
Provides a list of all of the requests that have completed, without waiting for any requests to complete. See TestSome().
MPI_Topo_test
Determines the topology (Cartesian, graph, or none) of a communicator.
MPI_Type_commit
Completes creation of an MPI datatype. This routine will be called automatically when the MPI datatype is being generated via reflection in GetDatatype(Type).
MPI_Type_contiguous
Creates a new datatype from a contiguous block of values of the same type.
MPI_Type_extent
Determines the extent of the datatype.
MPI_Type_free
Frees an MPI datatype.
MPI_Type_hindexed
Creates a new datatype from discontiguous blocks of values of the same type.
MPI_Type_hvector
Creates a new datatype from a strided block of values of the same type.
MPI_Type_indexed
Creates a new datatype from discontiguous blocks of values of the same type.
MPI_Type_size
Computes the size of a datatype.
MPI_Type_struct
Creates a new datatype from a structure containing discontiguous blocks of different types. This is the most general type constructor, and is used by the DatatypeCache to create MPI datatypes from .NET value types.
MPI_Type_vector
Creates a new datatype from a strided block of values of the same type.
MPI_Unpack
Unpacks (deserializes) data from a byte buffer. The serialized representation will have been packed by MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, Int32%, Int32) and (possibly) transmitted via MPI using the datatype MPI_PACKED. Serialization in MPI.NET is automatic, so this routine is very rarely used.
MPI_Wait
Wait until the given request has completed. See Wait().
MPI_Waitall
Wait until all of the given MPI requests have completed before returning. See WaitAll().
MPI_Waitany
Waits until any of the given MPI requests completes before returning. See WaitAny().
MPI_Waitsome
Wait until some MPI requests have completed, then provide a list of all of the requests that have completed. See WaitSome().
MPI_Wtick
Returns the resolution of MPI_Wtime(), in seconds. See TimeResolution.
MPI_Wtime
Returns the number of seconds, as a floating-point value, that have elapsed since some fixed time in the past. See Time.
ToString (Inherited from Object.)
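The low-level entry points above correspond directly to MPI.NET's high-level Communicator interface, which most programs should use instead of Unsafe. The following is a minimal, hedged sketch (assuming the MPI.NET types referenced throughout this table: Environment, Communicator.world, Operation<T>); it performs the equivalent of MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Allreduce with MPI_SUM, and MPI_Finalize.

```csharp
using MPI;

class AllreduceExample
{
    static void Main(string[] args)
    {
        // MPI_Init is invoked by the Environment constructor and
        // MPI_Finalize by its Dispose(), as noted in the table above.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // Equivalent in spirit to MPI_Allreduce with MPI_SUM:
            // every process contributes its rank, and every process
            // receives the combined total.
            int sum = comm.Allreduce(comm.Rank, Operation<int>.Add);

            System.Console.WriteLine("Rank {0} of {1}: sum = {2}",
                                     comm.Rank, comm.Size, sum);
        }
    }
}
```

Run under an MPI process launcher (e.g., mpiexec -n 4); by the semantics of allreduce, every rank receives the same total.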

Fields

  NameDescription
MPI_ANY_SOURCE
Predefined value for the "source" parameter to MPI receive or probe operations, which indicates that a message from any process may be matched.
MPI_ANY_TAG
Predefined value for the "tag" parameter to MPI receive or probe operations, which indicates that a message with any tag may be matched.
MPI_BAND
Compute the bitwise AND via an MPI reduction operation. See BitwiseAnd.
MPI_BOR
Compute the bitwise OR via an MPI reduction operation. See BitwiseOr.
MPI_BOTTOM
A special marker used for the "buf" parameter to point-to-point operations and some collectives that indicates that the derived datatype contains absolute (rather than relative) addresses. The use of MPI_BOTTOM is not recommended. This facility is unused in C# and .NET.
MPI_BXOR
Compute the bitwise exclusive OR via an MPI reduction operation. See ExclusiveOr.
MPI_BYTE
A single byte. This is equivalent to the byte type in C# and the System.Byte type in .NET.
MPI_CART
A constant used to indicate whether a communicator has a Cartesian topology.
MPI_CHAR
A single character. There is no equivalent to this type in C# or .NET.
MPI_COMM_NULL
Predefined communicator representing "no communicator". In the higher-level interface, this is represented by a null Communicator object.
MPI_COMM_SELF
Predefined communicator containing only the calling process. See self.
MPI_COMM_WORLD
Predefined communicator containing all of the MPI processes. See world.
MPI_CONGRUENT
Constant used in comparisons of MPI objects to denote that two objects are congruent, meaning that the objects act the same way but are not identical. See Congruent.
MPI_DATATYPE_NULL
A special datatype value that indicates "no datatype".
MPI_DOUBLE
A double-precision floating-point value. The equivalent is double in C# and System.Double in .NET.
MPI_ERR_ACCESS
Error class indicating that permission was denied when accessing a file.
MPI_ERR_AMODE
Error class indicating that the amode argument passed to MPI_File_open is invalid.
MPI_ERR_ARG
Error class indicating an invalid argument.
MPI_ERR_ASSERT
Error class indicating an invalid assert argument.
MPI_ERR_BAD_FILE
Error class indicating an invalid file name.
MPI_ERR_BASE
Error class indicating an invalid base argument.
MPI_ERR_BUFFER
Error class indicating an invalid buffer pointer.
MPI_ERR_COMM
Error class indicating an invalid communicator.
MPI_ERR_CONVERSION
Error class indicating that an error occurred in a user-supplied data conversion function.
MPI_ERR_COUNT
Error class indicating an invalid count argument.
MPI_ERR_DIMS
Error class indicating an invalid dimension argument (for Cartesian communicators).
MPI_ERR_DISP
Error class indicating an invalid displacement argument.
MPI_ERR_DUP_DATAREP
Error class indicating that conversion functions could not be registered because a conversion function has already been registered for this data representation identifier.
MPI_ERR_FILE
Error class indicating an invalid file handle argument.
MPI_ERR_FILE_EXISTS
Error class indicating that the file already exists.
MPI_ERR_FILE_IN_USE
Error class indicating that the file is already in use.
MPI_ERR_GROUP
Error class indicating an invalid group argument.
MPI_ERR_IN_STATUS
Error class indicating that the actual error code is in the status argument.
MPI_ERR_INFO
Error class indicating an invalid info argument.
MPI_ERR_INFO_KEY
Error class indicating an invalid info key.
MPI_ERR_INFO_NOKEY
Error class indicating that the requested info key is not defined.
MPI_ERR_INFO_VALUE
Error class indicating an invalid info value.
MPI_ERR_INTERN
Error class indicating that an internal error occurred in the MPI implementation.
MPI_ERR_IO
Error class indicating an I/O error.
MPI_ERR_KEYVAL
Error class indicating an invalid attribute key.
MPI_ERR_LASTCODE
The last valid error code for a predefined error class.
MPI_ERR_LOCKTYPE
Error class indicating an invalid locktype argument.
MPI_ERR_NAME
Error class indicating that an attempt has been made to look up a service name that has not been published.
MPI_ERR_NO_MEM
Error class indicating that no memory is available when trying to allocate memory with MPI_Alloc_mem.
MPI_ERR_NO_SPACE
Error class indicating that there is not enough space for the file.
MPI_ERR_NO_SUCH_FILE
Error class indicating that no such file exists.
MPI_ERR_NOT_SAME
Error class indicating that a collective argument is not the same on all processes, or collective routines were called in a different order.
MPI_ERR_OP
Error class indicating an invalid operation argument.
MPI_ERR_OTHER
Error class indicating an error that is known, but not described by other MPI error classes.
MPI_ERR_PENDING
Error class indicating that a request is still pending.
MPI_ERR_PORT
Error class indicating that a named port does not exist or has been closed.
MPI_ERR_QUOTA
Error class indicating that the user's quota has been exceeded.
MPI_ERR_RANK
Error class indicating an invalid rank.
MPI_ERR_READ_ONLY
Error class indicating that the file is read-only.
MPI_ERR_REQUEST
Error class indicating an invalid request argument.
MPI_ERR_RMA_CONFLICT
Error class indicating that there were conflicting accesses within a window.
MPI_ERR_RMA_SYNC
Error class indicating that RMA calls were incorrectly synchronized.
MPI_ERR_ROOT
Error class indicating an invalid root.
MPI_ERR_SERVICE
Error class indicating an attempt to unpublish a service name that has already been unpublished or was never published.
MPI_ERR_SIZE
Error class indicating an invalid size argument.
MPI_ERR_SPAWN
Error class indicating that an attempt to spawn a process has failed.
MPI_ERR_TAG
Error class indicating an invalid tag argument.
MPI_ERR_TOPOLOGY
Error class indicating an invalid topology for a communicator argument.
MPI_ERR_TRUNCATE
Error class indicating that a message was truncated on receive.
MPI_ERR_TYPE
Error class indicating an invalid data type argument.
MPI_ERR_UNKNOWN
Error class indicating that an unknown error occurred.
MPI_ERR_UNSUPPORTED_DATAREP
Error class indicating that an unsupported data representation was passed to MPI_FILE_SET_VIEW.
MPI_ERR_UNSUPPORTED_OPERATION
Error class indicating that an operation is unsupported.
MPI_ERR_WIN
Error class indicating an invalid window argument.
MPI_ERRHANDLER_NULL
Predefined error handler that represents "no" error handler.
MPI_ERRORS_ARE_FATAL
Predefined error handler that indicates that the MPI program should be terminated if an error occurs. This is the default error handler in the low-level MPI, which is overridden by MPI.NET.
MPI_ERRORS_RETURN
Predefined error handler that indicates that the MPI routine that detected an error should return an error code. MPI.NET uses this error handler to translate MPI errors into program exceptions.
MPI_FLOAT
A single-precision floating-point value. The equivalent is float in C# and System.Single in .NET.
MPI_GRAPH
A constant used to indicate whether a communicator has a Graph topology.
MPI_GROUP_EMPTY
An empty group containing no processes. See empty.
MPI_GROUP_NULL
A constant used to indicate the "null" group of processes. Corresponds to a null Group.
MPI_HOST
Predefined attribute key that can be used to determine the rank of the host process associated with MPI_COMM_WORLD. If there is no host, the result will be MPI_PROC_NULL. See HostRank.
MPI_IDENT
Constant used in comparisons of MPI objects to denote that two objects are identical. See Identical.
MPI_INFO_NULL
A special info key used to indicate that no extra information is being passed into a routine.
MPI_INT
A signed integer. This is equivalent to the int type in C# and System.Int32 in .NET.
MPI_IO
Predefined attribute key that can be used to determine the rank of the process that can perform I/O via the language-standard I/O mechanism. If every process can provide language-standard I/O, the resulting value will be MPI_ANY_SOURCE; if no process can support language-standard I/O, the result will be MPI_PROC_NULL. See IORank.
MPI_KEYVAL_INVALID
Special key value that indicates an invalid key.
MPI_LAND
Compute the logical AND via an MPI reduction operation. See LogicalAnd.
MPI_LONG
A long signed integer. There is no equivalent in C# or .NET, because the 64-bit integer in C# and .NET is mapped to MPI_LONG_LONG_INT.
MPI_LONG_DOUBLE
An extended-precision floating-point value. There is no equivalent in C# or .NET.
MPI_LONG_LONG
A long long signed integer. The equivalent is long in C# and System.Int64 in .NET. This is a synonym for MPI_LONG_LONG_INT.
MPI_LONG_LONG_INT
A long long signed integer. The equivalent is long in C# and System.Int64 in .NET. This is a synonym for MPI_LONG_LONG.
MPI_LOR
Compute the logical OR via an MPI reduction operation. See LogicalOr.
MPI_LXOR
Compute the logical exclusive OR via an MPI reduction operation. There is no high-level operation corresponding to this predefined MPI reduction.
MPI_MAX
Compute the maximum value via an MPI reduction operation. See Max.
MPI_MAX_ERROR_STRING
The maximum number of characters that can occur in an error string returned from MPI_Error_string(Int32, Byte[], ref Int32).
MPI_MAX_PROCESSOR_NAME
The maximum length of the string returned by MPI_Get_processor_name(Byte[], ref Int32).
MPI_MAXLOC
Compute the maximum value and location of that value via an MPI reduction operation. There is no high-level operation corresponding to this predefined MPI reduction.
MPI_MIN
Compute the minimum value via an MPI reduction operation. See Min.
MPI_MINLOC
Compute the minimum value and location of that value via an MPI reduction operation. There is no high-level operation corresponding to this predefined MPI reduction.
MPI_NULL_COPY_FN
Special "null" copy function that indicates that an attribute should not be copied.
MPI_NULL_DELETE_FN
Special "null" deletion function that indicates that no delete function should be called when an attribute is removed from a communicator.
MPI_OP_NULL
Placeholder operation that indicates "no operation".
MPI_PACKED
A special data type used to indicate data that has been packed with MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, ref Int32, Int32). This type is only used by the lowest-level MPI operations. The .NET equivalent is the DatatypeCache.Packed type.
MPI_PROC_NULL
Special value for the source or dest argument to any communication operation, which indicates that the communication is a no-op. Not supported in MPI.NET.
MPI_PROD
Compute the product via an MPI reduction operation. See Multiply.
MPI_REQUEST_NULL
Constant that indicates a "null" MPI request, meaning that there is no such request.
MPI_SHORT
A signed short integer. This is equivalent to the short type in C# and System.Int16 in .NET.
MPI_SIGNED_CHAR
A single, signed character. This is equivalent to the sbyte type in C# and the System.SByte type in .NET.
MPI_SIMILAR
Constant used in comparisons of MPI objects to denote that two objects are similar, but assign different ranks to each of the processes. See Similar.
MPI_STATUS_IGNORE
Constant used to indicate that the MPI_Status argument of an MPI operation will be ignored.
MPI_STATUSES_IGNORE
Constant used to indicate that the array of MPI_Status arguments to an MPI operation will be ignored.
MPI_SUCCESS
Error value indicating no error.
MPI_SUM
Compute the sum via an MPI reduction operation. See Add.
MPI_TAG_UB
Predefined attribute key that can be used to determine the maximum tag value that users are allowed to provide to a communication request. See MaxTag.
MPI_THREAD_FUNNELED
Indicates that the MPI program is multi-threaded, but all MPI operations will be called from the main thread. See Funneled.
MPI_THREAD_MULTIPLE
Indicates that the MPI program is multi-threaded, and any thread can call into MPI at any time. See Multiple.
MPI_THREAD_SERIALIZED
Indicates that the MPI program is multi-threaded, but only one thread will call into MPI at any given time. See Serialized.
MPI_THREAD_SINGLE
Indicates that the MPI program is single-threaded. See Single.
MPI_UNDEFINED
"Undefined" value used to identify when a rank is not a part of a group. See NoProcess.
MPI_UNEQUAL
Constant used in comparisons of MPI objects to denote that two objects are completely different. See Unequal.
MPI_UNSIGNED
An unsigned integer. This is equivalent to the uint type in C# and System.UInt32 in .NET.
MPI_UNSIGNED_CHAR
A single, unsigned character. There is no equivalent to this type in C# or .NET.
MPI_UNSIGNED_LONG
A long unsigned integer. There is no equivalent in C# or .NET, because the 64-bit unsigned integer in C# and .NET is mapped to MPI_UNSIGNED_LONG_LONG.
MPI_UNSIGNED_LONG_LONG
A long long unsigned integer. The equivalent is ulong in C# and System.UInt64 in .NET.
MPI_UNSIGNED_SHORT
An unsigned short integer. This is equivalent to the ushort type in C# and System.UInt16 in .NET.
MPI_WCHAR
A single, wide character. The equivalent is char in C# and System.Char in .NET.
MPI_WTIME_IS_GLOBAL
Predefined attribute key that can be used to determine whether the clocks (accessed via MPI_Wtime()) are synchronized across all processes. See IsTimeGlobal.
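The MPI_THREAD_* constants above correspond to MPI.NET's Threading enumeration, which is passed to the Environment constructor mentioned under MPI_Init_thread. A hedged sketch follows; the static Environment.Threading property used here to read back the granted level is an assumption of this example.

```csharp
using MPI;

class ThreadingExample
{
    static void Main(string[] args)
    {
        // Request MPI_THREAD_FUNNELED: the program is multi-threaded,
        // but only the main thread will make MPI calls.
        using (new MPI.Environment(ref args, Threading.Funneled))
        {
            // As with MPI_Query_thread, the library reports the level it
            // actually provided, which may differ from the one requested.
            // (Environment.Threading is assumed here for illustration.)
            System.Console.WriteLine("Requested Funneled, granted {0}",
                                     MPI.Environment.Threading);
        }
    }
}
```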

See Also