NHSE Review™ 1996 Volume, First Issue

Overview of Recent Supercomputers



Chapter 2 -- The Main Architectural Classes

For many years the taxonomy of Flynn [5] has proven to be useful for the classification of high-performance computers. This classification is based on the way instruction and data streams are arranged and comprises four main architectural classes (SISD, SIMD, MISD, and MIMD). We will first briefly sketch these classes and afterwards fill in some details when each class is described separately.

Although the difference between shared- and distributed-memory machines seems clear-cut, this is not always entirely the case from the user's point of view. For instance, the late Kendall Square Research systems employed the idea of "virtual shared memory" at the hardware level. Virtual shared memory can also be simulated at the programming level: the first draft proposal for High Performance Fortran (HPF), which distributes the data over the available processors by means of compiler directives, was published in November 1992 [6], and the proposal was finalised by May 1993. A system on which HPF is implemented will therefore act in this case as a shared-memory machine to the user. Other vendors of Massively Parallel Processing systems (the buzzword "MPP systems" is fashionable here), like Convex and Cray, also support proprietary virtual shared-memory programming models, which means that these physically distributed-memory systems, by virtue of the programming model, logically behave as shared-memory systems. In addition, packages like TreadMarks [1] provide a virtual shared-memory environment for networks of workstations.
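To give an impression of what such a data-distributing programming model does behind the scenes, the sketch below shows in plain C (all names are hypothetical; HPF itself expresses this with compiler directives in Fortran, e.g. a BLOCK distribution) how a global array index can be mapped onto an owning processor and a local element, so that the programmer may continue to think in terms of one global, shared array.

/*
 * Minimal sketch, not HPF itself: the bookkeeping that a BLOCK
 * distribution of a global array over NPROCS processors amounts to.
 * An HPF compiler generates equivalent mappings from its directives.
 */
#include <stdio.h>

#define N      16           /* global array length                    */
#define NPROCS  4           /* number of processors                   */

/* size of each processor's block (the last block may be shorter)     */
static int block_size(void)   { return (N + NPROCS - 1) / NPROCS; }

/* which processor owns global index i                                */
static int owner(int i)       { return i / block_size(); }

/* where global index i lives inside its owner's local array          */
static int local_index(int i) { return i % block_size(); }

int main(void)
{
    int i;
    for (i = 0; i < N; i++)
        printf("global A(%2d) -> processor %d, local element %d\n",
               i, owner(i), local_index(i));
    return 0;
}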

Another trend that has emerged in the last few years is distributed processing. This takes the DM-MIMD concept one step further: instead of many integrated processors in one or several boxes, workstations, mainframes, etc. are connected by Ethernet, FDDI, or otherwise and set to work concurrently on tasks in the same program. Conceptually, this is no different from DM-MIMD computing, but the communication between processors is often orders of magnitude slower. Many packages, both commercial and non-commercial, are available to realise distributed computing. Examples are Parasoft's Express (commercial), PVM (Parallel Virtual Machine, non-commercial), and MPI (Message Passing Interface [14], also non-commercial). PVM and MPI have been adopted, for instance, by Convex, Cray, IBM, and Intel for the transition stage between distributed computing and MPP on clusters of their favorite processors, and they are available on a large number of distributed-memory MIMD systems and even on shared-memory MIMD systems for compatibility reasons. In addition, there is a tendency to cluster shared-memory systems, for instance by HIPPI channels, to obtain systems with very high computational power. Silicon Graphics, for example, is already providing such arrays of systems; the Intel Paragon with MP (Multi-Processor) nodes and the NEC SX-4 also have this structure. The Convex Exemplar SPP-1200 could be seen as a more integrated example, although its software environment is much more complete and allows shared-memory addressing. A minimal message-passing example in the style offered by these packages is sketched below.
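The fragment below is a small MPI program in C, given only as a sketch of the message-passing style (it assumes an MPI-1 implementation and compiler are installed; the program itself is not from the packages discussed above). Every process learns its rank, the non-zero ranks send their rank to process 0, and process 0 adds them up.

/* Minimal MPI sketch: each process sends its rank to process 0,
   which sums the values it receives.  Compile with an MPI-aware C
   compiler and run with several processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, src, value, sum = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

    if (rank == 0) {
        for (src = 1; src < size; src++) {
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &status);
            sum += value;
        }
        printf("sum of ranks 1..%d = %d\n", size - 1, sum);
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}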


Copyright © 1996 Aad J. van der Steen and Jack J. Dongarra


