Functions

All of the following Tcl command callbacks share the signature
int TclMPI_<Name>(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[]):

TclMPI_Initialized, TclMPI_Finalized, TclMPI_Init, TclMPI_Conv_set, TclMPI_Conv_get, TclMPI_Finalize, TclMPI_Abort, TclMPI_Comm_size, TclMPI_Comm_rank, TclMPI_Comm_split, TclMPI_Comm_free, TclMPI_Barrier, TclMPI_Bcast, TclMPI_Scatter, TclMPI_Allgather, TclMPI_Gather, TclMPI_Allreduce, TclMPI_Reduce, TclMPI_Send, TclMPI_Isend, TclMPI_Recv, TclMPI_Irecv, TclMPI_Probe, TclMPI_Iprobe, TclMPI_Wait
int TclMPI_Abort(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Abort()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function translates the Tcl string representing a communicator into the corresponding MPI communicator and then calls MPI_Abort().
int TclMPI_Allgather(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Allgather()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a gather operation that collects data for TclMPI. This operation does not accept the tclmpi::auto data type, and support for types other than tclmpi::int and tclmpi::double is incomplete. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. The number of data items has to be the same on all processes of the communicator.
The result is converted back into Tcl objects and passed up as result value to the calling Tcl code on all processors. If the MPI call failed, an MPI error message is passed up as result instead.
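As an illustration, the Tcl-level call might look as follows; the ::tclmpi::allgather signature (<data> <type> <comm>) shown here is an assumption inferred from the described behavior, not normative:

```tcl
# each rank contributes one integer; every rank receives the combined list
set rank [::tclmpi::comm_rank tclmpi::comm_world]
set all  [::tclmpi::allgather [list $rank] tclmpi::int tclmpi::comm_world]
# with 4 ranks, $all would hold {0 1 2 3} on every process
```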
int TclMPI_Allreduce(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Allreduce()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a combined reduction and broadcast operation for TclMPI. This operation does not accept the tclmpi::auto data type, and support for types other than tclmpi::int and tclmpi::double is incomplete. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed.
The result is converted back into Tcl objects and passed up as result value to the calling Tcl code. If the MPI call failed, an MPI error message is passed up as result instead.
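A hypothetical usage sketch, assuming a Tcl-level signature of ::tclmpi::allreduce <data> <type> <op> <comm> and a tclmpi::sum operator constant:

```tcl
# sum one value per rank across the communicator; every rank gets the result
set rank  [::tclmpi::comm_rank tclmpi::comm_world]
set total [::tclmpi::allreduce [list $rank] tclmpi::int tclmpi::sum tclmpi::comm_world]
```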
int TclMPI_Barrier(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Barrier()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function translates the Tcl string representing a communicator into the corresponding MPI communicator and then calls MPI_Barrier(). If the MPI call failed, an MPI error message is passed up as result.
int TclMPI_Bcast(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Bcast()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a broadcast operation for TclMPI. Unlike in the C bindings, the length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. Only a limited number of data types are currently supported, since Tcl has a limited number of "native" data types. The tclmpi::auto data type transfers the internal string representation of an object, while the other data types convert data to native data types as needed, with all non-representable data translated into either 0 or 0.0. In all cases, two broadcasts are needed: the first transmits the amount of data being sent so that a suitable receive buffer can be set up; the second transmits the actual data.
The result of the broadcast is converted back into Tcl objects and passed up as result value to the calling Tcl code. If the MPI call failed, an MPI error message is passed up as result instead.
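A minimal sketch of the Tcl-level call, assuming a signature of ::tclmpi::bcast <data> <type> <root> <comm> with the root given as a numeric rank (an assumption, not confirmed by this section):

```tcl
# rank 0 broadcasts a list; every rank receives it as the command's return value
set rank [::tclmpi::comm_rank tclmpi::comm_world]
set data {}
if {$rank == 0} { set data {1 2 3} }
set data [::tclmpi::bcast $data tclmpi::auto 0 tclmpi::comm_world]
```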
int TclMPI_Comm_free(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Comm_free()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function deletes a defined MPI communicator and removes its Tcl representation from the local translation tables.
int TclMPI_Comm_rank(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Comm_rank()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function translates the Tcl string representing a communicator into the corresponding MPI communicator and then calls MPI_Comm_rank() on it. The resulting number is passed to Tcl as the result; if the MPI call failed, the MPI error message is passed up instead.
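The rank and size queries combine into the usual parallel "hello world"; the ::tclmpi command names below are a sketch assuming the C callbacks map to lowercased Tcl commands:

```tcl
::tclmpi::init
set rank [::tclmpi::comm_rank tclmpi::comm_world]
set size [::tclmpi::comm_size tclmpi::comm_world]
puts "hello from rank $rank of $size"
::tclmpi::finalize
```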
int TclMPI_Comm_size(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Comm_size()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function translates the Tcl string representing a communicator into the corresponding MPI communicator and then calls MPI_Comm_size() on it. The resulting number is passed to Tcl as the result; if the MPI call failed, the MPI error message is passed up instead.
int TclMPI_Comm_split(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Comm_split()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function translates the Tcl string representing a communicator into the corresponding MPI communicator, checks and converts the values for 'color' and 'key', and then calls MPI_Comm_split(). The resulting communicator is added to the internal communicator map linked list, and its string representation is passed to Tcl as the result. If the MPI call failed, the MPI error message is passed up instead.
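A hypothetical split by even/odd rank, assuming a Tcl-level signature of ::tclmpi::comm_split <comm> <color> <key>:

```tcl
# split tclmpi::comm_world into two halves by even/odd rank
set rank  [::tclmpi::comm_rank tclmpi::comm_world]
set color [expr {$rank % 2}]
set half  [::tclmpi::comm_split tclmpi::comm_world $color $rank]
# $half now names a communicator containing only the same-color ranks
```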
int TclMPI_Conv_get(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

Get error handler string for data conversions in TclMPI

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function returns which error handler is currently active for data conversions in TclMPI. For details see TclMPI_Conv_set().
There is no equivalent MPI function for this, since there are no data conversions in C or C++.
int TclMPI_Conv_set(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

Set error handler for data conversions in TclMPI

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function sets what action TclMPI should take if a conversion of a data element to the requested integer or double data type fails. There are currently three handlers implemented: TCLMPI_ERROR, TCLMPI_ABORT, and TCLMPI_TOZERO.
For TCLMPI_ERROR a Tcl error is raised and TclMPI returns to the calling function. For TCLMPI_ABORT an error message is written to the error output and parallel execution on the current communicator is terminated via MPI_Abort(). For TCLMPI_TOZERO the error is silently ignored and the data element set to zero.
There is no equivalent MPI function for this, since there are no data conversions in C or C++.
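A sketch of selecting and querying the handler; the Tcl-level constant name tclmpi::tozero is an assumption derived from the C-level TCLMPI_TOZERO and may differ in the actual binding:

```tcl
# request that failed conversions silently become zero
::tclmpi::conv_set tclmpi::tozero
# report which handler is now active
puts [::tclmpi::conv_get]
```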
int TclMPI_Finalize(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Finalize()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function does a little more than just call MPI_Finalize(). It also tries to detect whether MPI_Init() or MPI_Finalize() have been called before (from Tcl) and, in that case, creates a (catchable) Tcl error instead of an (uncatchable) MPI error.
int TclMPI_Finalized(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
wrapper for MPI_Finalized()
This function checks whether the MPI environment has been shut down.
int TclMPI_Gather(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Gather()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a gather operation that collects data for TclMPI. This operation does not accept the tclmpi::auto data type, and support for types other than tclmpi::int and tclmpi::double is incomplete. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. The number of data items has to be the same on all processes of the communicator.
The result is converted back into Tcl objects and passed up as result value to the calling Tcl code on the root processor. If the MPI call failed, an MPI error message is passed up as result instead.
int TclMPI_Init(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Init()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function does a little more work than just calling MPI_Init(). First, it tries to detect whether MPI_Init() has been called before (from Tcl) and, in that case, creates a (catchable) Tcl error instead of an (uncatchable) MPI error. It also tries to pass the script's argument vector from the Tcl-generated 'argv' array to the underlying MPI_Init() call and resets argv as needed.
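The typical initialization/teardown skeleton of a TclMPI script, assuming the package is loadable as "tclmpi" and the command names follow the lowercased ::tclmpi convention:

```tcl
package require tclmpi
::tclmpi::init          ;# must precede any other tclmpi call
# ... parallel work goes here ...
::tclmpi::finalize
```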
int TclMPI_Initialized(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
wrapper for MPI_Initialized()
This function checks whether the MPI environment has been initialized.
int TclMPI_Iprobe(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Iprobe()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a non-blocking probe operation for TclMPI. Argument flags for source, tag, and communicator are translated into their native MPI equivalents and then MPI_Iprobe() is called.

As with TclMPI_Probe, generating a status object to inspect the pending receive is optional. If desired, the corresponding argument is taken as a variable name, which is then populated as an associative array with entries similar to the fields of MPI_Status: source, tag, and error status, plus the message size, which is provided directly as multiple entries translated to the char, int, and double data types (COUNT_CHAR, COUNT_INT, COUNT_DOUBLE).

The flag returned by MPI_Iprobe, which is true if a matching request is pending, is passed to the calling routine as the Tcl result.
int TclMPI_Irecv(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Irecv()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a non-blocking receive operation for TclMPI. Since the length of the data object is supposed to be automatically adjusted to the amount of data being sent, this function needs to be more complex than a simple wrapper around the corresponding MPI C binding. It first calls tclmpi_add_req to generate a new entry in the list of registered MPI requests. It then calls MPI_Iprobe to see whether a matching send is already in progress, in which case the necessary amount of storage can be inferred from the MPI_Status object populated by MPI_Iprobe. If so, a temporary receive buffer is allocated, the non-blocking receive is posted, and all information is transferred to the tclmpi_req_t object. If not, only the arguments of the receive call are registered in the request object for later use. The command passes the Tcl string representing the generated MPI request to the Tcl interpreter as its return value. If the MPI call failed, an MPI error message is passed up as the result instead and a Tcl error is indicated.
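A sketch of the non-blocking receive pattern, assuming Tcl-level signatures of ::tclmpi::irecv <type> <source> <tag> <comm> and ::tclmpi::wait <request> (inferred, not normative):

```tcl
# post a non-blocking receive, do other work, then complete it
set req [::tclmpi::irecv tclmpi::auto tclmpi::any_source tclmpi::any_tag tclmpi::comm_world]
# ... unrelated computation can proceed here ...
set data [::tclmpi::wait $req]   ;# blocks until the message has arrived
```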
int TclMPI_Isend(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Isend()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a non-blocking send operation for TclMPI. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. Unlike for the blocking TclMPI_Send, in the case of tclmpi::auto a copy of the data has to be made, since the string representation of the send data might be invalidated during the send. The command generates a new tclmpi_req_t communication request via tclmpi_add_req, and the pointers to the data buffer and the MPI_Request info generated by MPI_Isend are stored in this request list entry for later use; see TclMPI_Wait. The generated string label representing this request is passed to the calling program as the Tcl result. If the MPI call failed, an MPI error message is passed up as the result instead and a Tcl error is indicated.
int TclMPI_Probe(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Probe()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a blocking probe operation for TclMPI. Argument flags for source, tag, and communicator are translated into their native MPI equivalents and then MPI_Probe() is called.

As with MPI_Probe, generating a status object to inspect the pending receive is optional. If desired, the corresponding argument is taken as a variable name, which is then populated as an associative array with entries similar to the fields of MPI_Status: source, tag, and error status, plus the message size, which is provided directly as multiple entries translated to the char, int, and double data types (COUNT_CHAR, COUNT_INT, COUNT_DOUBLE).
int TclMPI_Recv(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Recv()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a blocking receive operation for TclMPI. Since the length of the data object is supposed to be automatically adjusted to the amount of data being sent, this function first calls MPI_Probe to determine the amount of storage needed from the MPI_Status object populated by MPI_Probe. A temporary receive buffer is then allocated, the receive performed, and the data converted back to Tcl objects according to the data type passed to the receive command. Due to this deviation from the MPI C bindings, a 'count' argument is not needed. This command returns the received data to the calling procedure. If the MPI call failed, an MPI error message is passed up as the result instead and a Tcl error is indicated.
int TclMPI_Reduce(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Reduce()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a reduction operation for TclMPI. This operation does not accept the tclmpi::auto data type, and support for types other than tclmpi::int and tclmpi::double is incomplete. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed.
The result is collected on the process with rank root and converted back into Tcl objects and passed up as result value to the calling Tcl code. If the MPI call failed an MPI error message is passed up as result instead.
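A hypothetical call, assuming a Tcl-level signature of ::tclmpi::reduce <data> <type> <op> <root> <comm> with a numeric root rank and a tclmpi::max operator constant:

```tcl
# find the maximum rank; only rank 0 (the root) receives the result
set rank [::tclmpi::comm_rank tclmpi::comm_world]
set peak [::tclmpi::reduce [list $rank] tclmpi::int tclmpi::max 0 tclmpi::comm_world]
```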
int TclMPI_Scatter(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Scatter()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a scatter operation that distributes data for TclMPI. This operation does not accept the tclmpi::auto data type, and support for types other than tclmpi::int and tclmpi::double is incomplete. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. The number of data items has to be divisible by the number of processes of the communicator.
The result is converted back into Tcl objects and passed up as result value to the calling Tcl code. If the MPI call failed an MPI error message is passed up as result instead.
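A sketch of distributing two integers per rank, assuming a Tcl-level signature of ::tclmpi::scatter <data> <type> <root> <comm> (an inference from the described behavior):

```tcl
# rank 0 provides 2 integers per rank; each rank receives its own pair
set size [::tclmpi::comm_size tclmpi::comm_world]
set rank [::tclmpi::comm_rank tclmpi::comm_world]
set data {}
if {$rank == 0} {
    for {set i 0} {$i < 2 * $size} {incr i} { lappend data $i }
}
set mine [::tclmpi::scatter $data tclmpi::int 0 tclmpi::comm_world]
```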
int TclMPI_Send(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Send()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a blocking send operation for TclMPI. The length of the data is inferred from the data object passed to this function, so a 'count' argument is not needed. In the case of tclmpi::auto, the string representation of the send data is passed directly to MPI_Send(); otherwise a copy is made and the data converted.
If the MPI call failed, an MPI error message is passed up as result instead and a Tcl error is indicated, otherwise nothing is returned.
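A sketch of a blocking point-to-point exchange, assuming Tcl-level signatures of ::tclmpi::send <data> <type> <dest> <tag> <comm> and ::tclmpi::recv <type> <source> <tag> <comm> (inferred, not normative):

```tcl
# blocking ping from rank 0 to rank 1 using tag 5
set rank [::tclmpi::comm_rank tclmpi::comm_world]
if {$rank == 0} {
    ::tclmpi::send {hello} tclmpi::auto 1 5 tclmpi::comm_world
} elseif {$rank == 1} {
    set msg [::tclmpi::recv tclmpi::auto 0 5 tclmpi::comm_world]
}
```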
int TclMPI_Wait(ClientData nodata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])

wrapper for MPI_Wait()

Parameters
nodata | ignored
interp | current Tcl interpreter
objc | number of argument objects
objv | list of argument objects
This function implements a wrapper around MPI_Wait for TclMPI. Due to the design decisions in TclMPI, it works a bit differently than MPI_Wait, particularly for non-blocking receive requests. As explained in the TclMPI_Irecv documentation, the corresponding MPI_Irecv may not yet have been posted, so we first have to inspect the tclmpi_req_t object to see whether the receive still needs to be posted. If so, we follow roughly the same procedure as for a blocking receive: call MPI_Probe to determine the size of the receive buffer, allocate that buffer, and then post a blocking receive. If not, we call MPI_Wait to wait until the non-blocking receive has completed. In both cases, the result needs to be converted to Tcl objects and passed to the calling procedure as the Tcl return value. Then the receive buffers can be deleted and the tclmpi_req_t entry removed from its translation table.

For non-blocking send requests, MPI_Wait is called, and after completion the send buffer is freed and the tclmpi_req_t data released. The MPI specification allows calling MPI_Wait on non-existing MPI_Requests, in which case it returns immediately. This case is handled directly without calling MPI_Wait, since all generated MPI requests are cached.