NHSE Review™
One of the most common mistakes made by MPI users is to assume that an MPI implementation provides some amount of message buffering. To buffer a message is to make a temporary copy of a message between its source and destination buffers. This allows MPI_Send to return control to the caller before a corresponding MPI_Recv has been called. In this way, it is possible for two processes to exchange data by first calling MPI_Send and then calling MPI_Recv.
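The pitfall can be seen in a minimal sketch of such an exchange (rank count, buffer sizes, and the tag are illustrative):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, other;
    double sendbuf[1000] = {0}, recvbuf[1000];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;   /* assumes exactly two ranks */

    /* Both ranks send first, then receive.  This is safe only if the
       implementation buffers the outgoing message; otherwise each
       MPI_Send may block waiting for the matching receive, which is
       never posted, and the program deadlocks. */
    MPI_Send(sendbuf, 1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, 1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```

Whether this program runs to completion or hangs depends entirely on how much buffering the implementation happens to provide for a 1000-double message.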
MPI does not require any message buffering, and portable applications must not rely on it. To avoid deadlock, applications should instead use the non-blocking routines MPI_Isend and/or MPI_Irecv. Programs ported from PVM, and applications built on message "portability" packages, are especially prone to making unwarranted assumptions about buffering.
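A deadlock-free version of the same exchange can be sketched with the non-blocking routines (again, rank count, buffer sizes, and the tag are illustrative; MPI_Sendrecv is another portable alternative):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, other;
    double sendbuf[1000] = {0}, recvbuf[1000];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;   /* assumes exactly two ranks */

    /* Post the receive and the send without blocking, then wait for
       both to complete.  Because neither call waits for the partner
       before returning, correctness does not depend on the
       implementation buffering anything. */
    MPI_Irecv(recvbuf, 1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, 1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}
```

Posting the receive before the send also gives the implementation the chance to deliver the incoming message directly into recvbuf, avoiding an internal copy.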
The implementations described in this review vary greatly in the amount of buffering they provide, and some allow the user to control it through environment variables. For instance, the T3E implementation currently buffers messages of arbitrary size by default, while the SGI implementation may buffer as few as 64 bytes, depending on environment variables.
The amount of buffering is deliberately left open by the MPI standard so that implementors can make platform-specific optimizations. Reasons to provide a large amount of buffering include reducing application synchronization and improving small-message performance. Reasons to limit buffering include improving large-message performance, reducing memory-management complexity and overhead, and reducing MPI's internal memory use.