Abstract: This paper summarizes and organizes the views expressed and the issues discussed by a broad cross section of the participants at the workshop on Parallel Object-Oriented Methods and Applications (POOMA), held in Santa Fe, New Mexico, December 5-7, 1994 [11]. About six years after the first C++ conference, this workshop examined object-oriented programming (OOP) and its relevance to large-scale applications. The special focus was on the effectiveness, in terms of performance, of the OOP methodology when coupled with the parallel programming paradigm.
Authors: Aswini K. Chowdappa and Anthony Skjellum, Department of Computer Science & NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, USA.
Abstract: Explicit parallel programming using the Message Passing Interface (MPI), a de facto standard created by the MPI Forum, is quickly becoming the strategy of choice for performance-portable parallel application programming on multicomputers and networks of workstations, so it is inevitably of interest to C++ programmers who use such systems. MPI programming is currently undertaken in C or Fortran-77, via the language bindings defined by the MPI Forum. Although the committee deferred the definition of a C++ binding to MPI-2, it is already possible to develop parallel programs in C++ using MPI with the help of one of several support libraries, all of which strive to enable immediate C++ programming based on MPI. The first such enabling system, MPI++, is the focus of this chapter. MPI++ was an early effort on our part to leverage MPI while programming in C++. Here, this system serves largely as our vehicle to illustrate the added value of C++ in a message-passing environment and, conversely, the value of MPI for parallel programming with C++. We describe a performance-conscious alternative for exploiting parallelism with C++, without the benefit of a portable, mature compiler environment suitable for networks of workstations or massively parallel computers. We emphasize performance, portability, and good design throughout, and these goals constrain how eagerly we exploit certain features of C++ when building our parallel environment on top of MPI.
Authors: Anthony Skjellum, Ziyang Lu, Purushotham V. Bangalore, and Nathan Doss. Department of Computer Science and NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, Mississippi State, MS 39762, USA.