
The all-gather operation

 

All these operations were carried out by every node individually. Thus by now every node knows that it will have to look after particle_number particles. The next function call collects the value of particle_number from each node and puts the values together into the array counts:

MPI_Allgather ( &particle_number, 1, MPI_INT, counts, 1, MPI_INT,
		MPI_COMM_WORLD );
where, e.g., counts[3] corresponds to the value of particle_number on node number 3.
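Because the assembled array is delivered to every node, each node can immediately inspect what all of its peers received. The fragment below is an illustration only, not part of ninth.c; it assumes, as in the earlier sections, that my_rank and pool_size hold the results of MPI_Comm_rank and MPI_Comm_size, and that counts has at least pool_size entries:

   /* Illustration: after MPI_Allgather every node holds the full
      counts array, so each node can report, and add up, the
      particle_number values of all the others.                    */
   int i, total = 0;

   for (i = 0; i < pool_size; i++) {
      printf ("node %d sees counts[%d] = %d\n", my_rank, i, counts[i]);
      total += counts[i];
   }
   printf ("node %d: %d particles in the whole pool\n", my_rank, total);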

We have already encountered one MPI_All... operation before, MPI_Allreduce. It worked like MPI_Reduce, except that the result of the reduction operation was delivered to every participating node, not just to the root node. Similarly we have two related operations, MPI_Gather and MPI_Allgather. In the latter case the array assembled from the data contributed by the nodes is distributed to all nodes participating in the operation, rather than to the root node alone.
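To make the distinction concrete, here is a small sketch, not taken from ninth.c, in which every node contributes its own rank number. It assumes MPI_Init has already been called and that MAX_NODES is an assumed upper bound on the pool size:

   #define MAX_NODES 128            /* assumed upper bound on pool size */

   int my_rank, pool_size;
   int gathered[MAX_NODES];

   MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
   MPI_Comm_size (MPI_COMM_WORLD, &pool_size);

   /* MPI_Gather: only the root node (here node 0) receives the array. */
   MPI_Gather (&my_rank, 1, MPI_INT, gathered, 1, MPI_INT, 0,
               MPI_COMM_WORLD);

   /* MPI_Allgather: every node in the communicator receives the array. */
   MPI_Allgather (&my_rank, 1, MPI_INT, gathered, 1, MPI_INT,
                  MPI_COMM_WORLD);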

MPI_Allgather collects from each node in the communicator specified by its last argument a number of items given by its second argument, of the MPI type given by its third argument, reading them from the buffer pointed to by its first argument, and assembles the contributions into the array specified by its fourth argument. Of each node's contribution, only as many items as given by the fifth argument, interpreted according to the type given by the sixth argument, are deposited in the destination array. In our example this is the full contribution: one integer from each node.
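This correspondence can be spelled out directly on the call from ninth.c; the comments below are annotations added here, not part of the original source:

   MPI_Allgather ( &particle_number, /* send buffer: this node's contribution   */
                   1,                /* number of items sent by this node       */
                   MPI_INT,          /* MPI type of the items sent              */
                   counts,           /* receive buffer: the assembled array     */
                   1,                /* number of items received from each node */
                   MPI_INT,          /* MPI type of the items received          */
                   MPI_COMM_WORLD ); /* communicator over which to gather       */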

MPI_Allgather is discussed in section 4.7, page 107, of ``MPI: A Message-Passing Interface Standard''.



Zdzislaw Meglicki
Tue Feb 28 15:07:51 EST 1995