[This is preliminary documentation and is subject to change.]

The Unsafe type exposes the following methods.

Methods

ComparisonFromInt
Converts the result of an MPI_*_compare operation into a Comparison enum value.
Equals
Determines whether the specified Object is equal to the current Object.
(Inherited from Object.)
Finalize
Allows an Object to attempt to free resources and perform other cleanup operations before the Object is reclaimed by garbage collection.
(Inherited from Object.)
GetHashCode
Serves as a hash function for a particular type. GetHashCode() is suitable for use in hashing algorithms and data structures like a hash table.
(Inherited from Object.)
GetType
Gets the Type of the current instance.
(Inherited from Object.)
MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)
MPI_Abort
Aborts the current MPI program.
MPI_Address
Converts a pointer into an address for use with MPI. In many cases, this operation is simply a cast from the pointer's value to an integer.
MPI_Allgather
Gathers the values provided by each process into an array containing the contributions of all of the processes. This operation is equivalent to an MPI_Gather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32) to an arbitrary root followed by an MPI_Bcast(IntPtr, Int32, Int32, Int32, Int32) from that root. See Allgather<T>(T).
MPI_Allgatherv
Gathers the values provided by each process into an array containing the contributions of all of the processes. This operation differs from MPI_Allgather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32) in that it permits different processes to provide a different number of elements to be gathered. See Allgather<T>(T).
MPI_Allreduce
Performs a parallel reduction operation that summarizes the results from the input provided by all of the processes in the communicator. Semantically, this is equivalent to an MPI_Reduce(IntPtr, IntPtr, Int32, Int32, Int32, Int32, Int32) to an arbitrary root followed by an MPI_Bcast(IntPtr, Int32, Int32, Int32, Int32) from that process. See Allreduce<T>(T, ReductionOperation<T>).
MPI_Alltoall
Transmits data from every process in a communicator to every other process in the communicator. Similar to MPI_Allgather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32), except that each process can send different data to every other process. To send a different amount of data to each process, use MPI_Alltoallv(IntPtr, Int32*, Int32*, Int32, IntPtr, Int32*, Int32*, Int32, Int32) or MPI_Alltoallw(IntPtr, Int32*, Int32*, Int32*, IntPtr, Int32*, Int32*, Int32*, Int32). See Alltoall<T>(T[]).
MPI_Alltoallv
Transmits data from every process in a communicator to every other process in the communicator. Similar to MPI_Allgatherv(IntPtr, Int32, Int32, IntPtr, Int32*, Int32*, Int32, Int32), except that each process can send different data to every other process. If all of your processes send the same amount of data to each other, use the simpler MPI_Alltoall(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32); if you need the data sent to different processes to have different datatypes, use MPI_Alltoallw(IntPtr, Int32*, Int32*, Int32*, IntPtr, Int32*, Int32*, Int32*, Int32). See Alltoall<T>(T[]).
MPI_Alltoallw
Transmits data from every process in a communicator to every other process in the communicator. Similar to MPI_Allgatherv(IntPtr, Int32, Int32, IntPtr, Int32*, Int32*, Int32, Int32), except that each process can send different data to every other process. If all of your processes send the same amount of data to each other, use the simpler MPI_Alltoall(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32); if the volume of data sent to each process can be different but all of the data has the same type, use MPI_Alltoallv(IntPtr, Int32*, Int32*, Int32, IntPtr, Int32*, Int32*, Int32, Int32). See Alltoall<T>(T[]).
MPI_Barrier
A synchronization barrier: no process leaves the barrier until all processes have entered it. See Barrier().
MPI_Bcast
Broadcasts a value from the root process to every process within the communicator. See Broadcast<T>(ref T, Int32).
MPI_Cancel
Cancels an outstanding MPI communication request. See Cancel().
MPI_Comm_compare
Compare two communicators. See Compare(Communicator).
MPI_Comm_create
Creates a new communicator from a subgroup of the processes in an existing communicator. See Create(Group).
MPI_Comm_dup
Duplicates a communicator, creating a new communicator with the same processes and ranks as the original. See Clone().
MPI_Comm_free
Frees a communicator. This routine will be invoked automatically by Dispose() or the finalizer for Communicator.
MPI_Comm_group
Retrieve the group associated with a communicator. See Group.
MPI_Comm_rank
Determines the rank of the calling process in the communicator. See Rank.
MPI_Comm_remote_group
Retrieves the remote group from an intercommunicator. See RemoteGroup.
MPI_Comm_remote_size
Determines the number of processes in the remote group of an intercommunicator. See RemoteSize.
MPI_Comm_size
Determines the number of processes in the communicator. See Size.
MPI_Comm_split
Splits a communicator into several new communicators, based on the colors provided. See Split(Int32, Int32).
MPI_Comm_test_inter
Determine whether a communicator is an intercommunicator. In MPI.NET, intercommunicators will have type Intercommunicator.
MPI_Exscan
Performs a partial exclusive reduction on the data, returning to the process with rank P the result of combining the data provided by processes with ranks 0 through P-1. See ExclusiveScan<T>(T, ReductionOperation<T>).
MPI_Finalize
Finalizes (shuts down) MPI. This routine must be called before exiting the program. It will be invoked by Dispose().
MPI_Finalized
Determines whether MPI has already been finalized. See Environment.Finalized.
MPI_Gather
Gathers the values provided by each process into an array containing the contributions of all of the processes. This routine differs from MPI_Allgather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32) in that the results are gathered only to the "root" process, which is identified by its rank in the communicator. See Gather<T>(T, Int32).
MPI_Gatherv
Gathers the values provided by each process into an array at the "root" process, which is identified by its rank in the communicator. This routine differs from MPI_Gather(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32) in that it permits different processes to provide different numbers of elements. See Gather<T>(T, Int32).
MPI_Get_count
Determine the number of elements transmitted by a communication operation. See Count(Type).
MPI_Get_processor_name
Retrieve the name of the processor or compute node that is currently executing. See ProcessorName.
MPI_Group_compare
Compare two groups. See Compare(Group).
MPI_Group_difference
Create a group from the difference of two groups. See Subtraction(Group, Group).
MPI_Group_excl
Creates a subgroup containing all processes in an existing group except those specified. See Exclude(Int32[]).
MPI_Group_free
Frees a group. This routine will be invoked automatically by Dispose() or the finalizer for Group.
MPI_Group_incl
Creates a subgroup containing the processes with specific ranks in an existing group. See IncludeOnly(Int32[]).
MPI_Group_intersection
Create a group from the intersection of two groups. See BitwiseAnd(Group, Group).
MPI_Group_range_excl
Creates a subgroup of processes containing all of the processes in the source group except those described by one of the provided (first, last, stride) rank triples. Note: this precise functionality is not exposed directly in the normal MPI layer; however, the same semantics can be attained with Exclude(Int32[]).
MPI_Group_range_incl
Creates a subgroup of processes in a group, based on a set of (first, last, stride) rank triples. Note: this precise functionality is not exposed directly in the normal MPI layer; however, the same semantics can be attained with IncludeOnly(Int32[]).
MPI_Group_rank
Determine the rank of the calling process in a group. See Rank.
MPI_Group_size
Determine the number of processes in a group. See Size.
MPI_Group_translate_ranks
Translates the ranks of processes in one group into those processes' corresponding ranks in another group. See TranslateRanks(Int32[], Group).
MPI_Group_union
Create a group from the union of two groups. See BitwiseOr(Group, Group).
MPI_Init
Initializes MPI. This routine must be called before any other MPI routine. It will be invoked by the Environment constructor.
MPI_Init_thread
Initializes the MPI library with thread support. This operation subsumes MPI_Init(Int32*, Byte***). See Environment(ref String[], Threading).
MPI_Initialized
Determines whether MPI has already been initialized. See Environment.Initialized.
MPI_Intercomm_create
Create a new intercommunicator from two disjoint intracommunicators. See Intercommunicator(Intracommunicator, Int32, Intracommunicator, Int32, Int32).
MPI_Intercomm_merge
Merge the two groups in an intercommunicator into a single intracommunicator. See Merge(Boolean)
MPI_Iprobe
Test whether a message is available. See ImmediateProbe(Int32, Int32).
MPI_Irecv
A non-blocking receive that posts the intent to receive a value. The actual receive will be completed when the corresponding request is completed.
MPI_Is_thread_main
Determines whether the calling thread is the main MPI thread (the thread that called MPI_Init(Int32*, Byte***) or MPI_Init_thread(Int32*, Byte***, Int32, Int32*)). See IsMainThread.
MPI_Isend
An immediate (non-blocking) point-to-point send. See ImmediateSend<T>(T, Int32, Int32).
MPI_Op_create
Creates an MPI operation that invokes a user-provided function. The MPI operation can be used with various reduction operations. MPI.NET provides support for user-defined operations via the Operation<T> class.
MPI_Op_free
Frees an MPI operation created via MPI_Op_create(IntPtr, Int32, Int32*). MPI.NET will automatically manage any operations it creates via Operation<T> when the corresponding object is disposed of or finalized.
MPI_Pack
Packs (serializes) data into a byte buffer. This serialized representation can be transmitted via MPI with the datatype MPI_PACKED and unpacked with MPI_Unpack(IntPtr, Int32, Int32*, IntPtr, Int32, Int32, Int32). Serialization in MPI.NET is automatic, so this routine is very rarely used.
MPI_Pack_size
Determines the maximum amount of space required to pack incount values of a given MPI datatype. This routine is useful for allocating buffer space when packing data with MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, Int32*, Int32).
MPI_Probe
Wait until a message is available. See Probe(Int32, Int32).
MPI_Query_thread
Determine the level of threading support provided by the MPI library.
MPI_Recv
Receives a message from another process within the communicator. See Receive<T>(Int32, Int32).
MPI_Reduce
Performs a parallel reduction operation that summarizes the results from the data contributed by all of the processes in a communicator. Unlike MPI_Allreduce(IntPtr, IntPtr, Int32, Int32, Int32, Int32), the results of this operation are returned only to the process whose rank is equivalent to root, i.e., the "root" process. See Reduce<T>(T, ReductionOperation<T>, Int32).
MPI_Reduce_scatter
The equivalent of an MPI_Reduce(IntPtr, IntPtr, Int32, Int32, Int32, Int32, Int32) followed by an MPI_Scatterv(IntPtr, Int32*, Int32*, Int32, IntPtr, Int32, Int32, Int32, Int32), performing a reduction on the data provided in sendbuf and then scattering those results to all of the processes. See ReduceScatter<T>(T[], ReductionOperation<T>, Int32[]).
MPI_Request_free
Free the resources associated with a request.
MPI_Scan
Performs a partial inclusive reduction on the data, returning to the process with rank P the result of combining the data provided by processes with ranks 0 through P. See Scan<T>(T, ReductionOperation<T>).
MPI_Scatter
Scatters data from one process (the "root" process) to all of the processes in a communicator, with different parts of the data going to different processes. See Scatter<T>(T[], Int32).
MPI_Scatterv
Scatters data from one process (the "root" process) to all of the processes in a communicator, with different parts of the data going to different processes. Unlike MPI_Scatter(IntPtr, Int32, Int32, IntPtr, Int32, Int32, Int32, Int32), different processes may receive different amounts of data. See Scatter<T>(T[], Int32).
MPI_Send
Sends a message to another process within the communicator. See Send<T>(T, Int32, Int32).
MPI_Test
Tests whether the given request has completed. See Test().
MPI_Test_cancelled
Determine whether a particular communication operation was cancelled. See Cancelled.
MPI_Testall
Tests whether all of the given MPI requests have completed. See TestAll().
MPI_Testany
Tests whether any of the given MPI requests has completed. See TestAny().
MPI_Testsome
Provides a list of all of the requests that have completed, without waiting for any outstanding requests to complete. See TestSome().
MPI_Type_commit
Completes creation of an MPI datatype. This routine will be called automatically when the MPI datatype is being generated via reflection in GetDatatype(Type).
MPI_Type_contiguous
Creates a new datatype from a contiguous block of values of the same type.
MPI_Type_extent
Determines the extent of the datatype.
MPI_Type_free
Frees an MPI datatype.
MPI_Type_hindexed
Creates a new datatype from discontiguous blocks of values of the same type.
MPI_Type_hvector
Creates a new datatype from a strided block of values of the same type.
MPI_Type_indexed
Creates a new datatype from discontiguous blocks of values of the same type.
MPI_Type_size
Computes the size of a datatype.
MPI_Type_struct
Creates a new datatype from a structure containing discontiguous blocks of different types. This is the most general type constructor, and is used by the DatatypeCache to create MPI datatypes from .NET value types.
MPI_Type_vector
Creates a new datatype from a strided block of values of the same type.
MPI_Unpack
Unpacks (deserializes) data from a byte buffer. The serialized representation will have been packed by MPI_Pack(IntPtr, Int32, Int32, IntPtr, Int32, Int32*, Int32) and (possibly) transmitted via MPI using the datatype MPI_PACKED. Serialization in MPI.NET is automatic, so this routine is very rarely used.
MPI_Wait
Waits until the given request has completed. See Wait().
MPI_Waitall
Waits until all of the given MPI requests have completed before returning. See WaitAll().
MPI_Waitany
Waits until any of the given MPI requests completes before returning. See WaitAny().
MPI_Waitsome
Waits until at least one of the given MPI requests has completed, then provides a list of all of the requests that have completed. See WaitSome().
MPI_Wtime
Returns a floating-point count of seconds of wall-clock time elapsed since some arbitrary time in the past.
ToString
Returns a String that represents the current Object.
(Inherited from Object.)
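The Unsafe entry points above mirror the C MPI API; in typical MPI.NET programs they are reached indirectly through the safe wrappers named in the "See" cross-references. As a minimal sketch (assuming the MPI.NET assembly and an MPI runtime such as MS-MPI are installed; the program must be launched with mpiexec), the Environment/Allreduce pattern looks like this:

```csharp
using MPI; // MPI.NET

class AllreduceExample
{
    static void Main(string[] args)
    {
        // The Environment constructor calls MPI_Init/MPI_Init_thread;
        // disposing it (via 'using') calls MPI_Finalize.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Allreduce<T> wraps MPI_Allreduce: every rank contributes
            // a value, and every rank receives the combined result.
            int sum = world.Allreduce(world.Rank, Operation<int>.Add);

            if (world.Rank == 0)
                System.Console.WriteLine("Sum of all ranks: " + sum);
        }
    }
}
```

Run with, for example, `mpiexec -n 4 AllreduceExample.exe`; with 4 processes the reduction combines ranks 0 through 3. This sketch is illustrative of the safe layer, not of calling the Unsafe methods directly.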

See Also