PVP MPI (not a standard name) is derived from MPICH. It supports MPI applications within a single PVP (Parallel Vector Processor, such as the Cray J90, C90, and T90), using shared memory for communication, or spanning several PVPs, using TCP for communication. Because PVP clusters are rare and TCP communication is much slower, the rest of this discussion covers only the shared-memory version.
Shared-memory PVP MPI is implemented in an interesting way, demonstrating the flexibility of the MPI process model. MPI processes are in fact threads within a single process. Through the use of special compiler options, all user-declared variables are local to a thread, so that separate threads do not directly "see" each other's data. Since all "processes" share the same address space, message transfers can be done with a single copy from source to destination (instead of going through an intermediate buffer), and synchronization can use fast thread mechanisms. Applications using this process model are restricted to SPMD execution, in which every MPI process runs the same executable.
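To make the process model concrete, here is a minimal SPMD sketch in ordinary MPI C. Nothing in it is specific to PVP MPI, and the Cray compiler options that make file-scope data thread-private are assumed rather than named. Every rank runs this same executable; under PVP MPI each rank is actually a thread, so a variable like value below gets a per-thread copy, and the send/receive pair can in principle be carried out as a single copy within the shared address space.

/* Minimal SPMD sketch in standard MPI C.  Not taken from PVP MPI itself;
 * the Cray compiler flag that makes file-scope data thread-private under
 * PVP MPI is assumed and not shown here. */
#include <stdio.h>
#include <mpi.h>

static int value;   /* under PVP MPI, the special compile options give each
                       "process" (really a thread) its own copy of this */

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    value = rank;   /* would be a race if the variable were truly shared */

    if (rank == 0) {
        /* rank 0 collects one value from every other rank; because all
           ranks share one address space, the library can copy each message
           directly from sender to receiver with no intermediate buffer */
        int i, v;
        MPI_Status status;
        for (i = 1; i < nprocs; i++) {
            MPI_Recv(&v, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("rank 0 received %d from rank %d\n", v, i);
        }
    } else {
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}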
As far as process control goes, PVP MPI is fairly well integrated into the environment when run on a single host: an MPI application looks just like any other multithreaded application. Special scheduling (e.g. gang scheduling) is possible, but not well supported (and not supported at all on multiple nodes). Multiple-host jobs (using TCP) suffer from all of the problems of MPICH with the ch_p4 device. However, even the shared-memory version offers no special options for handling I/O, no debuggers that understand MPI jobs, and so on. Furthermore, PVP MPI suffers from the usual bugs of earlier MPICH versions, including the lack of MPI_Cancel and the problem with MPI-defined constants in Fortran.
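As a hedged illustration of what the missing MPI_Cancel rules out, the sketch below shows the standard MPI-1 pattern for abandoning a nonblocking receive that will never be matched. The code is plain MPI C and is not taken from PVP MPI; on a library that does not implement MPI_Cancel, the marked call is exactly what an application cannot rely on.

/* Illustration only: the kind of code affected by a missing MPI_Cancel.
 * Plain MPI-1 C, not PVP-specific. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, data = 0, flag = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* post a receive that may never be matched ... */
    MPI_Irecv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);

    /* ... and later decide the message is no longer needed */
    MPI_Cancel(&req);               /* unavailable in the versions discussed */
    MPI_Wait(&req, &status);
    MPI_Test_cancelled(&status, &flag);
    if (rank == 0)
        printf("receive cancelled: %s\n", flag ? "yes" : "no");

    MPI_Finalize();
    return 0;
}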