Abstract: ELROS, an acronym for Embedded Language for Remote Operation Services, was developed at LLNL for programming distributed applications. ELROS statements can be embedded in a conventional language such as C, which makes it easier to develop distributed applications. ELROS supports both synchronous and asynchronous operations and also provides a good exception-handling facility. Although ELROS provides a good programming interface, it does not provide primitives for collective operations. In this paper we present some of the collective operations that were written using ELROS. We also present programs that demonstrate how these operations can be implemented using sockets. Advantages and disadvantages of using ELROS over sockets are also discussed.
Authors: Kishore Viswanathan and Anthony Skjellum.
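The paper's socket-based versions of the collective operations are not reproduced in this listing. Purely as a hypothetical illustration (none of the names below come from the paper), a linear broadcast over pre-connected TCP sockets can be written with plain POSIX write/read:

    /* Hypothetical sketch, not from the paper: a naive broadcast in which a
     * root process writes the same buffer to one already-connected TCP
     * socket per peer.  Connection setup, peer addressing, and error
     * recovery are omitted; only POSIX write/read are used. */
    #include <stddef.h>
    #include <unistd.h>

    /* Root side: send len bytes of buf to every peer socket. */
    int bcast_root(const int *peer_fds, int npeers, const void *buf, size_t len)
    {
        for (int i = 0; i < npeers; i++) {
            size_t sent = 0;
            while (sent < len) {                 /* write may be partial */
                ssize_t n = write(peer_fds[i], (const char *)buf + sent, len - sent);
                if (n <= 0)
                    return -1;                   /* give up on error */
                sent += (size_t)n;
            }
        }
        return 0;
    }

    /* Non-root side: receive len bytes from the root's socket into buf. */
    int bcast_leaf(int root_fd, void *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(root_fd, (char *)buf + got, len - got);
            if (n <= 0)
                return -1;
            got += (size_t)n;
        }
        return 0;
    }

A tree-structured variant reduces the number of sequential sends at the root from P-1 to roughly log2(P) rounds, which is the usual refinement of this naive scheme.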
Authors: Anthony Skjellum, Mississippi State University, USA and Brian K. Grant, U. Washington.
Abstract: MPI is the de facto message passing standard for multicomputers and networks of workstations, established by the MPI Forum, a group of universities, research centers, and national laboratories (from both the United States and Europe), as well as multinational vendors in the area of high performance computing. Several groups have already implemented MPI, and worldwide acceptance has been quite rapid.
This paper overviews several areas in which MPI can be extended, discusses the merits of making such extensions, and begins to demonstrate how some of these extensions can be made. In some areas, such as intercommunicator extensions, we have already made significant progress. In other areas (such as remote memory access), we are merely proposing extensions to MPI that we have not yet reduced to practice. Furthermore, we point out that other researchers are evidently working in parallel with us on their own extension concepts for MPI.
Authors: Anthony Skjellum; Nathan E. Doss; Kishore Viswanathan and Aswini Chowdappa. Computer Science Department and NSF Engineering Research Center, Mississippi State University, USA.
Abstract: MPI is the new standard for multicomputer and cluster message passing introduced by the Message-Passing Interface Forum (MPIF) in April 1994. This paper describes the current inter-communicator interface found in MPI and the reasons for its current design. We also motivate the need for additional inter-communicator operations and introduce the extensions we have included in MPIX (MPI eXtension Library), a library of extensions to MPI that we are currently developing. Inter-communicators may be used for a variety of purposes, such as client/server applications (e.g., I/O and graphics servers), process management in dynamic process environments, and multi-protocol implementations; MPI's definitions are unnecessarily restrictive, so we extend them here. We discuss the inter-communicator collective operations defined in MPIX and illustrate their use. We also discuss additional inter-communicator construction routines not in the original MPI interface, but that are provided in MPIX. Our final contribution is a strategy for extending virtual topologies to support inter-communicators with the MPIX inter-communicator library.
Authors: Anthony Skjellum; Nathan E. Doss and Kishore Viswanathan. Computer Science Department and NSF Engineering Research Center, Mississippi State University, USA.
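The MPIX routines themselves are not shown here; their names are specific to the library described in the abstract above. As a hedged point of reference, the sketch below uses only standard MPI-1 calls to build the kind of two-group (e.g., client/server) inter-communicator on which MPIX layers its collective operations; the even/odd split and the tag value are arbitrary choices made for this example.

    /* Sketch using standard MPI-1 only: split MPI_COMM_WORLD into two
     * groups and connect them with an inter-communicator.  Run with at
     * least two processes. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm local, inter;
        int rank, color;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        color = rank % 2;           /* even ranks: "servers"; odd ranks: "clients" */
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local);

        /* Local leader is rank 0 of each half; the remote leader is the
         * lowest world rank of the other half.  Tag 99 is arbitrary. */
        MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
                             (color == 0) ? 1 : 0, 99, &inter);

        /* Point-to-point traffic (and, with MPIX, collective operations)
         * can now cross between the two groups through 'inter'. */

        MPI_Comm_free(&inter);
        MPI_Comm_free(&local);
        MPI_Finalize();
        return 0;
    }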
Abstract: P4 (Portable Programs for Parallel Processors) is a popular message passing system. The Pthreads library is a POSIX-standard implementation that supports multiple flows of control, called `threads', within a process. MPI (Message Passing Interface) is the emerging message passing system that will soon be the industry standard. This paper illustrates the use of multiple threads within P4 processes and thread-safe message passing. It also describes the issues that must be addressed when combining the two packages (P4 and Pthreads). We demonstrate thread-safe message passing by means of some test programs. Finally, we identify areas where MPI is potentially unsafe in a multithreaded environment, delve into the details of these issues, and discuss introducing multithreaded message passing into the MPICH implementation in the near future.
Authors: Aswini K. Chowdappa; Anthony Skjellum and Nathan E. Doss. Computer Science and NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, USA.
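The paper's test programs are not reproduced here. As an illustration of the general pattern only, a send routine that is not itself thread-safe can be serialized behind a single Pthreads mutex; the p4_send() prototype below is an assumption made for this example and may differ from the real P4 interface.

    /* Illustrative pattern only, not code from the paper: serialize calls
     * into a non-thread-safe message-passing library with one mutex. */
    #include <pthread.h>

    static pthread_mutex_t mp_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Assumed P4 prototype; consult the P4 documentation for the real one. */
    extern int p4_send(int type, int dest, char *msg, int len);

    /* Thread-safe wrapper: at most one thread is inside the library at a time. */
    int ts_send(int type, int dest, char *msg, int len)
    {
        int rc;
        pthread_mutex_lock(&mp_lock);
        rc = p4_send(type, dest, msg, len);
        pthread_mutex_unlock(&mp_lock);
        return rc;
    }

Coarse-grained locking of this kind is simple but serializes all communication, which is why finer-grained, natively thread-safe designs are of interest for systems such as MPICH.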
Abstract: This paper presents a new kind of portability system, Unify, which modifies the PVM message passing system to provide (currently a subset of) the Message Passing Interface (MPI) standard notation for message passing. Unify is designed to reduce the effort of learning MPI while providing a sensible means to make use of MPI libraries and MPI calls while applications continue to run in the PVM environment. We are convinced that this strategy will reduce the cost of porting completely to MPI, while providing a gradual path along which to evolve. Furthermore, it will permit immediate use of MPI-based parallel libraries in applications, even those that use PVM for user code.
We describe several paradigms for supporting MPI and PVM message passing notations in a single environment, and note related work on MPI and PVM implementations. We show the design options that existed within our chosen paradigm (an MPI interface added to the base PVM system), and explain why we chose that particular approach. We outline the complete evolution path for porting a PVM application to MPI with the help of porting libraries. Finally, we indicate our current directions and planned future work.
Authors: Vaughan; Skjellum (tony@aurora.cs.msstate.edu); Reese and Cheng. Mississippi State University, NSF Engineering Research Center for Computational Field Simulation & Department of Computer Science.
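Unify's actual implementation is not reproduced in this listing. Purely as a conceptual sketch of the layering idea, an MPI-style send of a raw byte buffer can be expressed with PVM 3 calls; the rank-to-task-id table and the omitted handling of communicators, contexts, and datatypes are precisely the parts a system like Unify has to supply.

    /* Conceptual sketch only, not the Unify implementation: an MPI-style
     * send of 'count' raw bytes expressed with PVM 3 calls. */
    #include <pvm3.h>

    extern int tids[];   /* hypothetical table mapping rank -> PVM task id */

    int mpi_like_send(void *buf, int count, int dest_rank, int tag)
    {
        pvm_initsend(PvmDataDefault);        /* new send buffer, XDR encoding */
        pvm_pkbyte((char *)buf, count, 1);   /* pack 'count' raw bytes */
        return pvm_send(tids[dest_rank], tag);
    }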
Abstract: Zipcode is a message-passing and process-management system that was designed for multicomputers and homogeneous networks of computers in order to support libraries and large-scale multicomputer software. The system has evolved significantly over the last five years, based on our experiences and identified needs. Features of Zipcode that were originally unique to it were its simultaneous support of static process groups, communication contexts, and virtual topologies, which together form the "mailer" data structure. Point-to-point and collective operations reference the underlying group, and use contexts to keep messages from becoming intermixed. Recently, we have added "gather-send" and "receive-scatter" semantics, based on persistent Zipcode "invoices," both as a means to simplify message passing and as a means to reveal more potential runtime optimizations. Key features of Zipcode appear in the forthcoming MPI standard.
Authors: Anthony Skjellum, Chemical Engineering, California Institute of Technology, USA; Steven G. Smith, NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, USA; Nathan E. Doss; Alvin P. Leung, Northeast Parallel Architectures Center, Syracuse University, USA and Manfred Morari.
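The abstract notes that key Zipcode features, notably communication contexts, appear in the forthcoming MPI standard. The fragment below is not Zipcode code; it only shows the MPI analogue of the context idea, in which a library duplicates the caller's communicator so that its internal messages can never be confused with the caller's.

    /* MPI analogue of a Zipcode context: a library keeps a private
     * duplicate of the user's communicator for its own traffic. */
    #include <mpi.h>

    struct solver {
        MPI_Comm comm;   /* private context for the library's own messages */
    };

    void solver_init(struct solver *s, MPI_Comm user_comm)
    {
        /* Same group as user_comm, but a distinct communication context. */
        MPI_Comm_dup(user_comm, &s->comm);
    }

    void solver_finalize(struct solver *s)
    {
        MPI_Comm_free(&s->comm);
    }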
Abstract: This paper describes the architecture and implementation of an integrated message-passing environment consisting of Zipcode and PVM. Zipcode is a high-level message-passing system for multicomputers and homogeneous networks of computers. PVM is a relatively low-level message-passing system designed for multicomputers and heterogeneous networks of computers. Although PVM provides a workable and easy-to-use message-passing system, it does not have some of the high-level constructs, such as communication contexts, required to develop reliable and scalable parallel libraries and large-scale distributed software. By porting Zipcode's high-level constructs on top of PVM, the integrated environment allows existing PVM applications to migrate gradually from a low-level to a high-level message-passing paradigm. In the integrated environment described here, Zipcode is ported on top of PVM via an intermediate layer, the emulated Cosmic Environment/Reactive Kernel (CE/RK) layer. Such an integrated environment allows programmers to utilize high-level Zipcode constructs as well as low-level PVM calls. Implementation details of the CE/RK primitives are presented. Planned future enhancements include performance improvement of the integrated system as well as adding heterogeneous environment support.
Author: Li-wei H. Lehman, NSF Engineering Research Center for Computational Field Simulation, Mississippi State University, Mississippi State, Mississippi, 39762, USA. 1993
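The CE/RK implementation details are given in the paper itself. The fragment below is only a rough sketch of the general approach under stated assumptions: the names ce_xsend/ce_xrecv, the single fixed message tag, and the convention that a blocking receive returns a freshly allocated buffer are inventions of this example, not the paper's interface.

    /* Rough sketch, not the paper's implementation: emulating CE/RK-style
     * blocking send/receive primitives on top of PVM 3. */
    #include <stdlib.h>
    #include <pvm3.h>

    #define CE_TAG 1   /* one tag for all emulated CE/RK traffic (assumption) */

    void ce_xsend(void *buf, int len, int dest_tid)
    {
        pvm_initsend(PvmDataRaw);            /* raw encoding: homogeneous network */
        pvm_pkbyte((char *)buf, len, 1);
        pvm_send(dest_tid, CE_TAG);
    }

    /* Blocks for the next emulated message; the caller frees the buffer. */
    void *ce_xrecv(int *len_out)
    {
        int bufid, bytes, tag, tid;
        char *buf;

        bufid = pvm_recv(-1, CE_TAG);        /* any sender, our tag */
        pvm_bufinfo(bufid, &bytes, &tag, &tid);
        buf = malloc((size_t)bytes);
        pvm_upkbyte(buf, bytes, 1);
        *len_out = bytes;
        return buf;
    }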