
Sending and receiving messages

 

At this stage the situation is as follows: one of the processes already knows who is the master, namely the master process itself. But no other process knows that. The master must inform other processes about the choice that has been made. The master does it by sending a message to every other process:

if (i_am_the_master) {
   int_buffer[0] = host_rank;
   for (destination = 0; destination < pool_size; destination++)
      if (destination != my_rank) 
         MPI_Send(int_buffer, 1, MPI_INT, destination, 1001, 
                  MPI_COMM_WORLD);
}

In order to send a message you must have a preallocated buffer. In our case it is the integer array int_buffer. The contents of the message are placed in the buffer, and the buffer is passed to MPI_Send. The second argument in the call to MPI_Send is the number of items in the buffer, and the third argument is the type of those items. You must not use C types here: MPI provides its own constants, such as MPI_INT, MPI_CHAR, and MPI_DOUBLE, which specify the type of the transmitted data. The fourth argument is the rank of the destination process within the communicator specified by the sixth argument. The fifth argument is the message tag, a number assigned to each message; tags can be used for ordering messages or for narrowing their scope.

When a message arrives at the addressee, it will be delivered only when the addressee posts a receive that asks for a matching tag number.

The function MPI_Send is described in section 3.2.1, page 16, of ``MPI: A Message-Passing Interface Standard''.
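The role of the tag deserves a small demonstration. The following sketch is not part of first.c: process 0 sends two messages with different tags, and process 1 deliberately receives the second one first by asking for its tag. The variable names and tag values are ours. (Note that receiving out of send order like this relies on the implementation buffering the small unmatched send; the standard calls such patterns unsafe for large messages.)

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int my_rank, a = 10, b = 20, x, y;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

   if (my_rank == 0) {
      MPI_Send(&a, 1, MPI_INT, 1, 1001, MPI_COMM_WORLD);
      MPI_Send(&b, 1, MPI_INT, 1, 2002, MPI_COMM_WORLD);
   } else if (my_rank == 1) {
      /* The tag argument selects which message is received:
         here the tag-2002 message is picked up first. */
      MPI_Recv(&y, 1, MPI_INT, 0, 2002, MPI_COMM_WORLD, &status);
      MPI_Recv(&x, 1, MPI_INT, 0, 1001, MPI_COMM_WORLD, &status);
      printf("received %d (tag 2002), then %d (tag 1001)\n", y, x);
   }

   MPI_Finalize();
   return 0;
}
```

Run with at least two processes, e.g. mpirun -np 2 a.out.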

The part of the code which refers to receiving processes is shown below:

else {
   MPI_Recv(int_buffer, 1, MPI_INT, MPI_ANY_SOURCE, 1001, 
            MPI_COMM_WORLD, &status);
   host_rank = int_buffer[0];
}

In order to receive a message, you must also have a buffer into which the message will be transferred. You must specify the length of the buffer in the second argument to MPI_Recv and the type of the expected data items in the third argument. The fourth argument specifies the rank of the process from which the message is to come. In this case we don't care where the message comes from, as long as it carries the tag 1001 and arrives within the MPI_COMM_WORLD communicator. The constant MPI_ANY_SOURCE is used to convey this indifference to MPI_Recv. The last argument is a pointer to status, which is a structure of type MPI_Status. Special MPI functions are available to read that structure, see section 5.8.1.

The function MPI_Recv is described in section 3.2.4, page 19, of ``MPI: A Message-Passing Interface Standard''.
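When wildcards such as MPI_ANY_SOURCE are used, the status structure is how the receiver finds out what actually arrived. The following sketch, with variable names of our own choosing, shows the MPI_SOURCE and MPI_TAG fields and the MPI_Get_count function, all of which are defined by the MPI standard:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int my_rank, data[4] = {1, 2, 3, 4}, recv_buffer[10], count;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

   if (my_rank == 0) {
      MPI_Send(data, 4, MPI_INT, 1, 1001, MPI_COMM_WORLD);
   } else if (my_rank == 1) {
      /* Receive from anyone, with any tag, into a buffer that is
         larger than the message; then interrogate the status. */
      MPI_Recv(recv_buffer, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      MPI_Get_count(&status, MPI_INT, &count);
      printf("got %d ints from process %d, tag %d\n",
             count, status.MPI_SOURCE, status.MPI_TAG);
   }

   MPI_Finalize();
   return 0;
}
```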

After all processes, with the exception of the master itself, have received the message, the whole process pool knows the rank of the master process.

The functions MPI_Send and MPI_Recv are fundamental. With these two functions and with the environmental enquiry facilities covered in section 3.4.1, you can already build serious applications. John Salmon's famous N-body octal tree code, for example, was designed to use only message send and receive facilities, in order to ensure easy portability of the program between various message-passing systems.

This section also illustrates how you can make various processes execute different parts of the code depending on their rank. This is a very common situation in message passing programs.
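The whole pattern of this section can be gathered into one runnable sketch. In first.c the master is chosen through environmental enquiries; purely for illustration we assume here that rank 0 is the master. Otherwise the send and receive branches are the ones shown above:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int my_rank, pool_size, destination, host_rank;
   int int_buffer[1], i_am_the_master;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
   MPI_Comm_size(MPI_COMM_WORLD, &pool_size);

   i_am_the_master = (my_rank == 0);   /* assumption for this sketch */

   if (i_am_the_master) {
      host_rank = my_rank;
      int_buffer[0] = host_rank;
      for (destination = 0; destination < pool_size; destination++)
         if (destination != my_rank)
            MPI_Send(int_buffer, 1, MPI_INT, destination, 1001,
                     MPI_COMM_WORLD);
   } else {
      MPI_Recv(int_buffer, 1, MPI_INT, MPI_ANY_SOURCE, 1001,
               MPI_COMM_WORLD, &status);
      host_rank = int_buffer[0];
   }

   /* Every process now executes the same code again: all of them
      know the rank of the master. */
   printf("process %d: the master is process %d\n", my_rank, host_rank);

   MPI_Finalize();
   return 0;
}
```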






Zdzislaw Meglicki
Tue Feb 28 15:07:51 EST 1995