
The worker process


Worker processes have a harder life. They begin by sending the random number server a request for a chunk of CHUNKSIZE random numbers:

MPI_Send ( &request, 1, MPI_INT, server, REQUEST, world );

Then they find their rank within the newly created workers communicator:

MPI_Comm_rank ( workers, &my_worker_rank );
This number does not have to be the same as their rank within the MPI_COMM_WORLD communicator. We don't make any use of that number in this program, but you can print it to a log file if you would like to see it. The point here is simply that a process can have several rank numbers, each associated with a different communicator.
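
As a quick illustration (this fragment is not part of fifth.c, and the variable names world_rank and worker_rank are ours), a worker could query and print both ranks side by side:

int world_rank, worker_rank;

MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
MPI_Comm_rank( workers, &worker_rank );
/* the two numbers will in general differ, for example because
   the server belongs to MPI_COMM_WORLD but not to workers */
printf( "world rank %d, worker rank %d\n", world_rank, worker_rank );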

Now the workers enter the while loop. They receive a batch of random numbers from the server:

MPI_Recv( rands, CHUNKSIZE, MPI_INT, server, REPLY, world,
          &status );
scale them, and convert them into points that fit into the 2 x 2 square. Then for every point they check whether it falls inside the unit circle or outside it, and depending on the result they increment either the in or the out counter.

for (i = 0; i < CHUNKSIZE;) {
   /* scale each pair of random integers to a point in the square */
   x = (((double) rands[i++])/max) * 2 - 1;
   y = (((double) rands[i++])/max) * 2 - 1;
   if (x * x + y * y < 1.0) in++;   /* inside the unit circle */
   else out++;                      /* outside it */
}
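
To see the estimation technique in isolation, here is a minimal serial sketch of the same Monte Carlo computation, with the C library's rand() standing in for the random number server; it is an illustration only, not part of fifth.c:

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
   int i, in = 0, out = 0;
   double x, y;

   for (i = 0; i < 1000000; i++) {
      /* a random point in the 2 x 2 square */
      x = (((double) rand())/RAND_MAX) * 2 - 1;
      y = (((double) rand())/RAND_MAX) * 2 - 1;
      if (x * x + y * y < 1.0) in++;
      else out++;
   }
   printf( "pi is approximately %f\n", (4.0 * in)/(in + out) );
   return 0;
}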

Now they exchange their information with all other processes in the workers communicator, using a new MPI function that we haven't seen yet:

MPI_Allreduce(&in, &totalin, 1, MPI_INT, MPI_SUM, workers);
MPI_Allreduce(&out, &totalout, 1, MPI_INT, MPI_SUM, workers);
This function works the same way as MPI_Reduce, except that the reduced data ends up not with one selected root process but with every process participating in the communicator. Function MPI_Allreduce is discussed in section 4.9.5, page 122, of ``MPI: A Message-Passing Interface Standard''.

The data to be contributed to the pool is passed as the first argument, and the number of items is given by the third. The reduced result is written to the buffer passed as the second argument. The MPI type (not the C type!) of the data is given by the fourth argument, the operation to be performed on the data by the fifth, and the communicator by the sixth.
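
If you already know MPI_Reduce and MPI_Bcast, the call above can be thought of as a combination of the two. The fragment below is only a sketch of that equivalence, using rank 0 of workers as the root; it is not what fifth.c actually does:

/* conceptually the same effect as
   MPI_Allreduce(&in, &totalin, 1, MPI_INT, MPI_SUM, workers); */
MPI_Reduce( &in, &totalin, 1, MPI_INT, MPI_SUM, 0, workers );
MPI_Bcast( &totalin, 1, MPI_INT, 0, workers );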

Now every process can evaluate π and the accuracy of the result, and decide whether it is time to quit. Since the points are spread uniformly over a square of area 4 and the unit circle has area π, the fraction of points falling inside the circle approximates π/4; hence the factor of 4 below:

Pi = (4.0*totalin)/(totalin + totalout);
error = fabs( Pi - 3.141592653589793238462643);
done = ((error < epsilon) || ((totalin + totalout) > 1000000));
Thanks to MPI_Allreduce they all work with identical totals, so they should all arrive at identical results.

If the job is done, they set the request to be sent to the random number server to 0; otherwise they set it to 1:

request = (done) ? 0 : 1;

Observe that only the master process will send the termination signal (a request of 0) to the random number server:

if (i_am_the_master) {
   printf("\rpi = %23.20lf", Pi); fflush(stdout);
   /* the master always reports back, even when request == 0 */
   MPI_Send( &request, 1, MPI_INT, server, REQUEST, world );
}
else {
   if (request)   /* the other workers stay silent once done */
      MPI_Send( &request, 1, MPI_INT, server, REQUEST, world );
}
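
To see why this asymmetry is needed, recall roughly what the server does. The fragment below is a simplified sketch in the spirit of the previous section, not the literal code: the server keeps serving chunks until it receives a request of 0, so exactly one process, the master, must be the one to send that 0.

/* simplified sketch of the server loop */
do {
   MPI_Recv( &request, 1, MPI_INT, MPI_ANY_SOURCE, REQUEST,
             world, &status );
   if (request) {
      /* ... fill rands with CHUNKSIZE fresh random numbers ... */
      MPI_Send( rands, CHUNKSIZE, MPI_INT, status.MPI_SOURCE,
                REPLY, world );
   }
} while (request);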

Before we quit the program, we release the workers communicator:

MPI_Comm_free(&workers);





