The first function call of our program is

MPI_Cart_create ( MPI_COMM_WORLD, PROCESS_DIMENSIONS, divisions,
                  periods, reorder, &cartesian_communicator );

This function, discussed on page 177, section 6.5.1 of ``MPI: A Message-Passing Interface Standard'', creates a new communicator to which the Cartesian topology information is attached. The new communicator is cut out of the old one, MPI_COMM_WORLD in our case. The Cartesian structure will have PROCESS_DIMENSIONS dimensions. In our case PROCESS_DIMENSIONS is 2, so we'll create a two-dimensional lattice of processes. The shape of the lattice is conveyed in the array divisions, which in our case is

{PROCESS_ROWS, PROCESS_COLUMNS} = {4, 3}

In other words, our lattice of processes will be 4 x 3. We also specify in this case that we don't want the mesh to be wrapped around like a torus. The mesh is not periodic: the array periods has been set to {0, 0}.
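Assembled into a minimal, compilable program, the setup just described might look like this. This is only a sketch: the constants follow the text, but everything around the MPI_Cart_create call itself (initialization, clean-up) is an assumption about the surrounding program.

```c
#include <mpi.h>

#define PROCESS_DIMENSIONS 2
#define PROCESS_ROWS       4
#define PROCESS_COLUMNS    3

int main(int argc, char *argv[])
{
    MPI_Comm cartesian_communicator;
    int divisions[PROCESS_DIMENSIONS] = { PROCESS_ROWS, PROCESS_COLUMNS };
    int periods[PROCESS_DIMENSIONS]   = { 0, 0 };  /* non-periodic: no torus */
    int reorder = 1;                               /* MPI may re-rank processes */

    MPI_Init(&argc, &argv);

    /* Carve a 4 x 3 Cartesian grid of processes out of MPI_COMM_WORLD. */
    MPI_Cart_create(MPI_COMM_WORLD, PROCESS_DIMENSIONS, divisions,
                    periods, reorder, &cartesian_communicator);

    /* ... work within the grid ... */

    if (cartesian_communicator != MPI_COMM_NULL)
        MPI_Comm_free(&cartesian_communicator);
    MPI_Finalize();
    return 0;
}
```

Running it requires an MPI installation, e.g. mpicc to compile and mpirun to launch the 13 processes of our farm.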
MPI may reorder the ranking of processes in the new cartesian_communicator if that would improve the communication within the new topology. By setting reorder to 1, we are saying that we don't mind if MPI does just that.
Personally, I would much rather see variables of type boolean_t in place of periods and reorder, but, in spite of a heroic stance put up by judge Stanley Sporkin, not every C runs under POSIX yet.
Our cartesian_communicator has room for 12 processes. Since there are 13 CPUs provided by our farm, one of the processes must be dropped. For that one process cartesian_communicator will be MPI_COMM_NULL.
It would be an error to perform any operations related to cartesian_communicator within a process which didn't make it into the new pool. For this reason the remainder of the program is enclosed in one large if statement:

if (cartesian_communicator != MPI_COMM_NULL) { ... }
You must make absolutely sure here that the rejected process is not the master process responsible for the communication! I don't actually take any precautions on this point within this program, because the increased complexity would obscure the basic idea behind the topic of this lesson. The program could, however, be restructured to take care of that possibility.
Within the body of the if we declare variables which are of use to the surviving processes only. Each process finds out about its new rank within the cartesian_communicator, about its Cartesian coordinates within the grid, and about its neighbours:

MPI_Comm_rank ( cartesian_communicator, &my_cartesian_rank );
MPI_Cart_coords ( cartesian_communicator, my_cartesian_rank,
                  PROCESS_DIMENSIONS, my_coordinates );
MPI_Cart_shift ( cartesian_communicator, SIDEWAYS, RIGHT,
                 &left_neighbour, &right_neighbour );
MPI_Cart_shift ( cartesian_communicator, UPDOWN, UP,
                 &bottom_neighbour, &top_neighbour );
The function MPI_Cart_shift doesn't shift anything. It lets you find out which processes you should exchange data with, using, e.g., MPI_Sendrecv, if you would like to achieve the effect of shifting the data left, right, up, or down within the process grid.
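A rightward shift of one double per process could then be arranged as below. This is a sketch only, assuming the variables declared in the listing above; the message tag and buffer names are made up for the illustration. Note that at the edges of a non-periodic mesh MPI_Cart_shift returns MPI_PROC_NULL, and a send or receive involving MPI_PROC_NULL completes immediately as a no-op, so no special edge handling is needed.

```c
/* Sketch: each process sends one value to its right neighbour and
   receives one from its left neighbour in a single call, which is
   exactly "shifting the data right" across the process grid. */
double outgoing = 1.0;   /* value leaving this process (assumed)   */
double incoming = 0.0;   /* value arriving from the left neighbour */
MPI_Status status;

MPI_Cart_shift(cartesian_communicator, SIDEWAYS, RIGHT,
               &left_neighbour, &right_neighbour);
MPI_Sendrecv(&outgoing, 1, MPI_DOUBLE, right_neighbour, 17,
             &incoming, 1, MPI_DOUBLE, left_neighbour,  17,
             cartesian_communicator, &status);
```

Using MPI_Sendrecv rather than separate MPI_Send and MPI_Recv calls avoids the deadlock that would occur if every process tried to send first.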
Note that the words SIDEWAYS, RIGHT, etc., are not special MPI terms. They simply stand for 1 or 0. It is therefore up to us which direction we want to call RIGHT and UP.
The next part of the program simply exchanges messages about neighbours and Cartesian coordinates between the master and other processes.
One of the ways to address the possibility that a process rejected from cartesian_communicator is the master would be to take this part of the program outside of the if (cartesian_communicator != MPI_COMM_NULL) statement. The communication which takes place here does not refer to cartesian_communicator.
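Such a restructuring might be sketched as follows. This is an assumption about how the exchange could be organized, not the program's actual code: the master is taken to be rank 0 in MPI_COMM_WORLD, and the tag value is made up. Every process takes part, rejected or not, because the exchange is conducted over MPI_COMM_WORLD.

```c
/* Sketch: report grid membership to the master over MPI_COMM_WORLD,
   placed OUTSIDE the "if (cartesian_communicator != MPI_COMM_NULL)"
   block, so that the one rejected process participates too. */
int my_world_rank, in_grid;
MPI_Comm_rank(MPI_COMM_WORLD, &my_world_rank);
in_grid = (cartesian_communicator != MPI_COMM_NULL);

if (my_world_rank == 0) {
    int pool_size, i, flag;
    MPI_Status status;
    MPI_Comm_size(MPI_COMM_WORLD, &pool_size);
    for (i = 1; i < pool_size; i++) {
        MPI_Recv(&flag, 1, MPI_INT, i, 99, MPI_COMM_WORLD, &status);
        printf("process %d is %s the grid\n", i, flag ? "in" : "out of");
    }
} else {
    MPI_Send(&in_grid, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);
}
```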