-----------------------------------------------------------------------------
Applied Parallel Research HPF Tools DataSheet
-----------------------------------------------------------------------------

High Performance Fortran (HPF) Parallelization Tools
from Applied Parallel Research, Inc.

Here is a High Performance Fortran (HPF) pre-compiler that converts programs
written in the HPF Subset language into a Fortran 77 SPMD (Single Program
Multiple Data) parallelized program with message passing calls. The
parallelized program is immediately compilable and executable on all the
major MPP systems as well as on networked clusters of workstations.

APR's HPF pre-compiler, xhpf, is available today. Features include:

  * APR's FORGE Magic technology to automatically parallelize Fortran 77
    into an HPF program.
  * HPF program consistency checking.
  * Parallelization of array assignments, FORALL, and DO loops.
  * Interactive review of parallelization strategies using FORGE
    Interactive DMP.
  * Parallel runtime performance analysis for locating inter-processor
    communication bottlenecks and load balancing problems.

The xhpf pre-compiler uses the KAPR pre-processor from Kuck and Associates
to lower the Fortran 90 constructs in an HPF Subset program to standard
Fortran 77. The second pass of xhpf converts this into an SPMD parallelized
Fortran 77 program with calls to APR's runtime parallel library, which acts
as an interface to a number of common message passing data communication
libraries.

The parallel code may be executed today on the following supported MPP
systems:

  * IBM SP1 using EUI
  * Intel Paragon using NX
  * Meiko using Parmacs
  * Cray T3D using PVM
  * nCUBE using native libraries
  * CM-5 using native libraries
  * Networked workstation clusters using PVM, Express, or Linda

---------------------------------------
Global HPF Program Consistency Checking
---------------------------------------

Because local and global consistency of HPF directives with the program
context is critical, xhpf includes a special pass that checks all directives
against the static analysis of the program and issues diagnostics for arrays
that are illegally or inconsistently partitioned. It ensures that the HPF
directives it finds in your program actually make sense.

-------------------------------
Generate HPF Programs with xhpf
-------------------------------

In its automatic parallelization mode, xhpf converts a serial Fortran 77
program into an HPF program. It uses APR's Magic technology to analyze the
program from a global perspective, determine the arrays to be partitioned
and the loops to be distributed, and arrive at an initial parallelization
strategy. This process can be driven by real serial execution timings,
gathered by running the program as instrumented by xhpf, or by determining
the most significant loops and arrays statically. Preliminary tests show
APR's Magic technology achieving 80% of the performance obtained from hand
parallelization of certain applications.

xhpf will also output the original Fortran program with the parallelization
expressed in HPF directives, which can then be refined by the user and fed
back into the pre-compiler to generate compilable code.

-----------------------------
Parallel Performance Analysis
-----------------------------

To refine a program's parallelization strategy for distributed memory
systems, we need to know how well or how poorly the program performs in
parallel. In particular, we need to know where the bottlenecks for
interprocessor communication are, and the causes of losses due to poor
processor load balancing and excessive overhead. xhpf can instrument the
parallelized programs it generates to produce a timing report when run on
the target multiprocessor. This report profiles the program's parallel
performance and identifies data communication costs as well as routine and
loop timings.
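As a concrete illustration, here is a hypothetical sketch of a serial
Fortran 77 loop annotated with HPF Subset directives of the kind described
above. The program name, array sizes, and directive choices are ours for
illustration; this is not actual xhpf output.

```fortran
      PROGRAM SAXPY
C     Hypothetical sketch: a Fortran 77 loop annotated with HPF Subset
C     directives (DISTRIBUTE, ALIGN, INDEPENDENT), not actual xhpf output.
      REAL X(1024), Y(1024)
C     Partition X block-wise across processors; keep Y aligned with X
C     so each element pair lives on the same processor.
CHPF$ DISTRIBUTE X(BLOCK)
CHPF$ ALIGN Y(I) WITH X(I)
      DO 10 I = 1, 1024
         X(I) = REAL(I)
         Y(I) = 1.0
   10 CONTINUE
C     Assert the iterations are independent, so the loop may be
C     distributed across processors with no communication.
CHPF$ INDEPENDENT
      DO 20 I = 1, 1024
         Y(I) = Y(I) + 2.0 * X(I)
   20 CONTINUE
      END
```

Because Y is aligned with X and the loop carries no dependences, a
pre-compiler can distribute the iterations to match the BLOCK partitioning
and generate no inter-processor communication for this loop.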
With parallel performance timings in hand, you can fine-tune the
parallelization strategy by restructuring the code or inserting directives
to alter the data partitioning or loop distribution decisions.

---------------------------
HPF Subset Accepted by xhpf
---------------------------

Fortran 90 Features:
  Fortran 77, MIL-STD-1753, array sections, array constructors, operations
  on arrays or array sections, array assignment, WHERE statement and block,
  array-valued external functions, automatic arrays, ALLOCATABLE and
  ALLOCATE/DEALLOCATE, assumed-shape arrays, FORALL statement.

Fortran 90 Intrinsics:
  Argument presence, numeric elemental functions, bit manipulation
  elemental functions, vector and matrix multiply functions, array
  reduction, inquiry, construction, and reshape functions, array
  manipulation and location functions.

HPF Directives:
  DISTRIBUTE (on 1 dimension), ALIGN, VIEW, INHERIT, TEMPLATE, INDEPENDENT.

Extended HPF Features (not part of the HPF Subset):
  REDISTRIBUTABLE (using the APR PARTITION directive), Extrinsic
  Procedures.

--------------
MAGIC Products
--------------

APR offers three MAGIC Pre-Compilers:

  dpf   for distributed memory systems
  spf   for shared memory systems
  xhpf  for HPF directives and Fortran 90 array syntax on distributed
        memory systems

------------------
Other APR Products
------------------

  forge90  Interactive parallelizers for distributed & shared memory
           systems
  forgex   FORGE Explorer, a Motif GUI global Fortran program browser

---------------------
Platforms and Targets
---------------------

APR's products are available to run on various systems including HP, SUN,
IBM RS/6000, DEC Alpha, and Cray. Parallelizations and runtime support are
available for: workstation clusters, IBM SP1 and POWER/4, Intel Paragon,
nCUBE, Meiko, Cray T3D, TMC CM-5.
-----------------
Other Information
-----------------

For further information on these tools and our parallelization techniques
training workshops, contact us at:

--------------------------------------------------------------------------
Applied Parallel Research, Inc.
550 Main Street, Suite I, Placerville, CA 95667
Phone: 916/621-1600   Fax: 916/621-0593
email: forge@netcom.com
--------------------------------------------------------------------------

Copyright (c) 1993 Applied Parallel Research, Inc.                    11/93