TclMPI  1.2
Tcl Bindings for MPI
TclMPI User's Guide

This page describes Tcl bindings for MPI. The package provides a shared object that can be loaded into a Tcl interpreter to add commands that act as an interface to an underlying MPI implementation. This makes it possible to run Tcl scripts in parallel via mpirun or mpiexec, in the same way as C, C++, or Fortran programs, and to communicate between the processes via wrappers to MPI function calls.

The original motivation for writing this package was to complement a Tcl wrapper for the LAMMPS molecular dynamics simulation software, but also to allow using the VMD molecular visualization and analysis package in parallel without having to recompile VMD, and to offer an API that is convenient for people who already know how to write parallel programs with MPI in C, C++, or Fortran. It has since been adapted to provide an MPI wrapper for the OpenSees software: https://github.com/ambaker1/OpenSeesMPI

Pre-compiled Binary Packages

While it is usually expected that MPI-based parallel applications are compiled from source code using the target machine's local MPI implementation, that is not always convenient or necessary. This applies, for example, to the Windows platform or to Linux distributions where mechanisms are in place to check that all prerequisites are installed and that binaries are compatible. The TclMPI homepage has links to available binaries and information about how to install them as they become available.

Compilation

The package currently consists of a single C source file, which will usually be compiled for dynamic linkage but can also be compiled into a new Tcl interpreter with TclMPI included (required on platforms that only support static linkage), plus a Tcl script file. In addition, the package contains some examples, a simple unit test harness (implemented in Tcl), and a set of tests to be run with either one MPI rank (test01, test02) or two MPI ranks (test03, test04).

The build system uses CMake (version 3.16 or later) and has been confirmed to work on Linux, macOS, and Windows using a variety of C compilers (GNU, Clang, Intel, PGI, MSVC). You need to have both Tcl and MPI installed, including their respective development support packages (sometimes called SDKs). The MPI library has to be at least MPI-2 standard compliant, and the Tcl version should be 8.6 or later. When TclMPI is compiled into a dynamically loaded shared object (DSO) or DLL file, the MPI library has to be compiled and linked with support for building shared libraries as well.

To configure and build TclMPI you run CMake the usual way, in a console window with:

cmake -B build-folder -S .
cmake --build build-folder
cmake --install build-folder

There are a few settings that can be used to adjust what is compiled and installed and where. The following settings are supported:

  • BUILD_TCLMPI_SHELL Build a tclmpish executable as extended Tcl shell (default: on)
  • ENABLE_TCL_STUBS Use the Tcl stubs mechanism (default: on, requires Tcl 8.6 or later)
  • CMAKE_INSTALL_PREFIX Path to installation location prefix (default: (platform specific))
  • BUILD_TESTING Enable unit testing (default: on)
  • DOWNLOAD_MPICH4WIN Download MPICH2-1.4.1 headers and link library (default: off, only supported when cross-compiling on Linux for Windows)

To change a setting from its default, append -D<SETTING>=<VALUE> to the cmake command line, replacing <SETTING> and <VALUE> accordingly; alternatively, you may use the ccmake text mode UI or cmake-gui.
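As an illustration, a configuration that installs into a user's home directory and disables the extended Tcl shell might look as follows (the prefix path is only an example; adjust it to your system):

```shell
# configure with a custom install prefix and without the tclmpish shell
cmake -B build-folder -S . \
      -D CMAKE_INSTALL_PREFIX=$HOME/.local \
      -D BUILD_TCLMPI_SHELL=off
# then compile and install as before
cmake --build build-folder
cmake --install build-folder
```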

Building the Documentation

Documentation in HTML and PDF format is extracted from the sources using doxygen, if available. The build of the HTML format documentation is requested with

cmake --build build-folder --target html

The documentation will be in folder build-folder/html. To generate the PDF documentation, PDFLaTeX and several LaTeX style packages need to be installed. This is requested using

cmake --build build-folder --target pdf

and the resulting documentation will be in build-folder/tclmpi_docs.pdf.

Installation

To install the TclMPI package you can use

cmake --build build-folder --target install

which should by default install the compiled shared object and the associated two Tcl files into a subfolder of <CMAKE_INSTALL_PREFIX>/tcl8.6. The default value of CMAKE_INSTALL_PREFIX is system specific, but it can be changed with -D CMAKE_INSTALL_PREFIX=/some/path when configuring with CMake; the installation will then go into the corresponding location.

To tell Tcl where to find the package, you need to either set or expand the TCLLIBPATH environment variable to point to the folder into which you have installed the files, or place set auto_path [concat /usr/local/tcl8.6/ $auto_path] at the beginning of your Tcl script or in your .tclshrc file (or .vmdrc or similar). Then you should be able to load the TclMPI wrappers on demand with the command package require tclmpi.
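For instance, assuming the files were installed under /usr/local/tcl8.6 (adjust the path to your actual install prefix), the environment-variable route could look like this before launching a script:

```shell
# make the TclMPI package visible to the Tcl interpreter
export TCLLIBPATH=/usr/local/tcl8.6
# launch a (hypothetical) script myscript.tcl on two MPI ranks
mpirun -np 2 tclsh myscript.tcl
```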

For the extended Tcl shell tclmpish, the _tclmpi.so file is not used; instead, tclmpish already includes the corresponding code and needs to be run in place of tclsh. For that you may append the bin folder of the installation tree to your PATH environment variable. When using the custom Tcl shell, the startup script is called .tclmpishrc instead of .tclshrc.
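With the bin folder on the PATH, running a script through the statically linked shell could look like this (install prefix and script name are examples only):

```shell
# make tclmpish findable, assuming an install prefix of /usr/local
export PATH=/usr/local/bin:$PATH
# run a (hypothetical) script myscript.tcl on four MPI ranks
mpirun -np 4 tclmpish myscript.tcl
```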

Software Development and Bug Reports

The TclMPI code is maintained using git for source code management, and the project is hosted on GitHub at https://github.com/akohlmey/tclmpi. From there you can download snapshots of the development and release versions, clone the repository to follow development, or work on your own branches after forking it. Bug reports and feature requests should also be filed on GitHub through the issue tracker at https://github.com/akohlmey/tclmpi/issues.

Example Programs

The following sections provide some simple examples that use TclMPI to recreate common MPI example programs in Tcl.

Hello World

This is the TclMPI version of "hello world".

#!/usr/bin/env tclsh
package require tclmpi 1.2
# initialize MPI
::tclmpi::init
# get size of communicator and rank of process
set comm tclmpi::comm_world
set size [::tclmpi::comm_size $comm]
set rank [::tclmpi::comm_rank $comm]
puts "hello world, this is rank $rank of $size"
# shut down MPI
::tclmpi::finalize
exit 0
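Assuming the script above is saved as hello.tcl (a file name chosen here for illustration), it is launched in parallel like any other MPI program, for example on four processes:

```shell
mpirun -np 4 tclsh hello.tcl
```

Each rank then prints one line with its own rank number; the order in which the lines appear is not deterministic.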

Computation of Pi

This script uses TclMPI to compute the value of Pi from numerical quadrature of the integral:

\[ \pi = \int^1_0 {\frac{4}{1 + x^2}} dx \]
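The script below approximates this integral with the midpoint rule on n equal subintervals (n is the num argument passed to the script); each rank sums every size-th term, starting at its own rank:

\[ \pi \approx h \sum_{i=0}^{n-1} \frac{4}{1 + \left(h\,(i + 0.5)\right)^2}, \qquad h = \frac{1}{n} \]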

#!/usr/bin/env tclsh
package require tclmpi 1.2
# initialize MPI
::tclmpi::init
set comm tclmpi::comm_world
set size [::tclmpi::comm_size $comm]
set rank [::tclmpi::comm_rank $comm]
set master 0
set num [lindex $argv 0]
# make sure all processes have the same interval parameter
set num [::tclmpi::bcast $num ::tclmpi::int $master $comm]
# run parallel calculation
set h [expr {1.0/$num}]
set sum 0.0
for {set i $rank} {$i < $num} {incr i $size} {
    set sum [expr {$sum + 4.0/(1.0 + ($h*($i+0.5))**2)}]
}
set mypi [expr {$h * $sum}]
# combine and print results
set mypi [::tclmpi::allreduce $mypi tclmpi::double \
              tclmpi::sum $comm]
if {$rank == $master} {
    set rel [expr {abs(($mypi - 3.14159265358979)/3.14159265358979)}]
    puts "result: $mypi. relative error: $rel"
}
# shut down MPI
::tclmpi::finalize
exit 0

Distributed Sum

This small example distributes a data set across all processes and computes the sum of all its elements in parallel.

#!/usr/bin/env tclsh
package require tclmpi 1.2
# data summation helper function
proc sum {data} {
    set sum 0
    foreach d $data {
        set sum [expr {$sum + $d}]
    }
    return $sum
}
::tclmpi::init
set comm $tclmpi::comm_world
set mpi_sum $tclmpi::sum
set mpi_double $tclmpi::double
set mpi_int $tclmpi::int
set size [::tclmpi::comm_size $comm]
set rank [::tclmpi::comm_rank $comm]
set master 0
# the master rank creates the list of data
set dataSize 1000000
set data {}
if { $rank == $master } {
    for { set i 0 } { $i < $dataSize } { incr i } {
        lappend data $i
    }
}
# add padding, so the number of data elements is divisible
# by the number of processors as required by tclmpi::scatter
set needpad [expr {$dataSize % $size}]
set numpad [expr {$needpad ? ($size - $needpad) : 0}]
if { $rank == $master } {
    for {set i 0} {$i < $numpad} {incr i} {
        lappend data 0
    }
}
set blocksz [expr {($dataSize + $numpad)/ $size}]
# distribute the data and do the summation on each node,
# then sum the result across all nodes. Note: the data
# is integer, but we need to do the full sum in double
# precision to avoid overflows.
set mydata [::tclmpi::scatter $data $mpi_int $master $comm]
set sum [::tclmpi::allreduce [sum $mydata] $mpi_double $mpi_sum $comm]
if { $rank == $master } {
    puts "Distributed sum: $sum"
}
::tclmpi::finalize

TclMPI Tcl command reference

All TclMPI Tcl commands are placed into the tclmpi namespace.