The book contains a course in the broad field of "high-performance computing", intended for graduate and postdoctoral students from universities and vocational high schools, and for technical and research workers. The course is self-contained, but some general knowledge (e.g. of a high-level programming language) is assumed. The material can be used by lecturers in courses on computers, computer science or computational science, but can also be used by students directly without further guidance.
Topics: Experiences using a CMS package with commercial applications; Comparisons of the performance characteristics encountered when running applications under different environments; Experiences of using similar computing environments on different hardware platforms; Load balancing applications; Experiences of using various environments, from Virtual Shared Memory to Message Passing; Application responsiveness and scalability; Parameter estimation for performance modelling of applications; Monitoring and modelling applications; Resource management and configuration; Task management and synchronisation; Collaborative tools and techniques; Authentication, security and information surety; The use of emerging technologies to help tackle problem solving in a distributed computing environment; and others.
Deadlines: Abstracts: 15th September 1996; Abstracts approved: 30th September 1996; Full papers: 15th December 1996.
Topics: Parallel Architectures; Memory Hierarchies; Parallel Algorithms; Scientific Computing; Parallel Languages; Programming Environments; Parallelizing Compilers; Special Purpose Processors; VLSI Systems; Performance Modeling/Evaluation; Signal & Image Processing Systems; Parallel Implementations of Application Tasks; Interconnection Networks and Implementation Technologies and others.
Deadlines: Papers: 20th September 1996; Notification: 13th December 1996; Camera-ready papers: 20th January 1997; Workshop proposals: 30th August 1996; Tutorial proposals: 31st October 1996; Exhibits: 31st October 1996.
See also http://cuiwww.unige.ch/~ipps97
See also http://www.cs.utexas.edu/users/rvdg/abstracts/SC96.html ABSTRACT: In this paper, we introduce a new parallel library effort, as part of the PLAPACK project, that attempts to address discrepancies between the needs of applications and parallel libraries. A number of contributions are made, including a new approach to matrix distribution, new insights into layering parallel linear algebra libraries, and the application of "object-based" programming techniques which have recently become popular for (parallel) scientific libraries. We present an overview of a prototype library, the SL_Library, which incorporates these ideas. Preliminary performance data shows that this more application-centric approach to libraries does not necessarily impact performance adversely, compared with more traditional approaches.
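To make the "object-based" flavour concrete, here is a minimal C++ sketch of what an object-centred linear algebra interface might look like. All names (DistMatrix, gemm) are invented for illustration and are not the actual SL_Library or PLAPACK interface; a real distributed implementation would store and dispatch on distribution metadata held inside the objects.

    // Hypothetical illustration only: DistMatrix and gemm are invented
    // names, not the SL_Library/PLAPACK API.
    #include <cstdio>
    #include <vector>

    // A matrix object that carries its own (here trivial) distribution
    // metadata, so callers say *what* to compute, not how data is laid out.
    class DistMatrix {
    public:
        DistMatrix(int rows, int cols)
            : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}
        int rows() const { return rows_; }
        int cols() const { return cols_; }
        double& at(int i, int j)       { return data_[i * cols_ + j]; }
        double  at(int i, int j) const { return data_[i * cols_ + j]; }
    private:
        int rows_, cols_;
        std::vector<double> data_;
    };

    // C = A * B expressed against the objects alone; a real library would
    // select the parallel algorithm from the operands' distributions.
    void gemm(const DistMatrix& A, const DistMatrix& B, DistMatrix& C) {
        for (int i = 0; i < A.rows(); ++i)
            for (int j = 0; j < B.cols(); ++j) {
                double sum = 0.0;
                for (int k = 0; k < A.cols(); ++k)
                    sum += A.at(i, k) * B.at(k, j);
                C.at(i, j) = sum;
            }
    }

    int main() {
        DistMatrix A(2, 2), B(2, 2), C(2, 2);
        A.at(0, 0) = 1.0; A.at(1, 1) = 1.0;   // A = 2x2 identity
        B.at(0, 0) = 3.0; B.at(1, 1) = 4.0;
        gemm(A, B, C);
        std::printf("C(0,0)=%g C(1,1)=%g\n", C.at(0, 0), C.at(1, 1));
        return 0;
    }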
See also http://lovelace.nas.nasa.gov/MPI-IO/
This volume represents a thorough revision of "Fortran 90 Explained". It includes more detailed explanations of many features with more examples (giving about 18 additional pages), as well as new appendices (on avoiding Fortran 77 extensions, plus an extended pointer example, a further 12 pages). It also incorporates all the interpretations, and has a completely new chapter on Fortran 95 (18 pages). It is a complete and authoritative description of Fortran 90/95.
Published by Oxford University Press, Oxford and New York, 1996, ISBN 0 19 851888 9. See also http://www.oup.co.uk/ (UK) or http://www.oup-usa.org (US)
Provides full MPI-1.1 functionality; implemented as a thin layer on top of the C MPI bindings; offers convenient and intuitive object-oriented abstractions for message passing, and uses many of the powerful semantic features of the C++ language, such as data typing, polymorphism, etc.
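As a rough illustration of how such a thin layer can exploit C++ typing, consider the sketch below. The wrapper names (send, recv, mpi_type) are invented for this example and are not the announced library's interface; underneath, only standard MPI-1 C calls are used.

    // Sketch of the idea only: the wrappers below are illustrative,
    // not the actual C++ bindings being announced.
    #include <mpi.h>

    // Map C++ types to MPI datatypes at compile time, so callers never
    // pass MPI_INT/MPI_DOUBLE by hand.
    template <typename T> struct mpi_type;
    template <> struct mpi_type<int> {
        static MPI_Datatype get() { return MPI_INT; }
    };
    template <> struct mpi_type<double> {
        static MPI_Datatype get() { return MPI_DOUBLE; }
    };

    // Thin, type-safe layer over the C bindings.
    template <typename T>
    void send(const T* buf, int count, int dest, int tag, MPI_Comm comm) {
        MPI_Send(const_cast<T*>(buf), count, mpi_type<T>::get(),
                 dest, tag, comm);
    }

    template <typename T>
    void recv(T* buf, int count, int source, int tag, MPI_Comm comm) {
        MPI_Status status;
        MPI_Recv(buf, count, mpi_type<T>::get(), source, tag, comm, &status);
    }

    // Run with at least two processes.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double x = 3.14;
        if (rank == 0)      send(&x, 1, 1, 0, MPI_COMM_WORLD);  // type deduced
        else if (rank == 1) recv(&x, 1, 0, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }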
Topics: Distributed database modeling and design techniques; Parallel and distributed object management; Interoperability in multidatabase systems; Parallel on-line transaction processing; Parallel and distributed query optimization; Parallel and distributed active databases; Parallel and distributed real-time databases; Multimedia and hypermedia databases; Databases and programming systems; Mobile computing and databases; Transactional workflow control; Parallel and distributed algorithms; Temporal databases; Data mining/Knowledge discovery; Use of distributed database technology in managing engineering, biological, geographic, spatial, scientific, and statistical data; Scheduling and resource management and others.
Deadlines: Papers: 1st November 1996; Notification: 1st March 1997.
Topics: Computational Chemistry and Biomedical Applications; Communication and Computer Networks; Computational Physics / Astronomy; Computer Architecture; Computing Applications (Electronics, Fluid Dynamics, Meteorology/Environmental Science, Solid Mechanics); Data Mining; Parallel Algorithms; Parallel Programming Languages and Tools; High Performance Parallel Systems and Performance Evaluation; Scalable I/O; Applications on the Information Superhighway; Scientific Visualization and Workstation Clustering.
Deadlines: Papers, Tutorials, Round-tables, Panels, Visual Presentations, Research Exhibits and Exhibitors: 15th November 1996.
A two-day course offered in association with EPCC. We will follow the EPCC notes and course structure. Speakers from Edinburgh and Southampton will present the 8 sections of the course, interspersed with practical sessions using NA Software's HPF compiler on the Meiko CS-2. This course will only take place if there is sufficient interest. (Free to members of UK higher education institutions.)
Topics: End-user HPCN applications, computational science and computer science research in HPCN.
Deadlines: Extended Abstracts / Full Papers: 1st November 1996; Posters: 1st November 1996; Workshops: 1st November 1996; Notification: 1st February 1997.
See also http://www.wins.uva.nl/hpcn/ (From 15th August 1996)
It is difficult to write programs which are both correct and efficient even for a single MIMD parallel architecture. A program which executes efficiently on one member of this architecture class is often not portable at all to other members, and where porting is possible, the efficiency attained is usually not satisfactory on any architecture.
Our approach to the problem of programming MIMD parallel architectures rests on raising the level of abstraction at which parallel program structures are expressed, and on moving to a compositional approach to programming. The CODE 2.0 model of parallel programming permits parallel programs to be created by composing basic units of computation and defining relationships among them. It expresses the communication and synchronization relationships of units of computation as abstract dependencies, and can express communication structures that are determined at runtime.
Ready access to these abstractions is provided by a flexible graphical interface in which the user specifies them as extended directed graphs. This enables both easy preparation of correct programs and compilation to efficient execution on multiple target architectures. The compositional approach focuses the programmer's attention on the structure of the program, rather than on the development of small unit transformations. In the CODE 2.0 system, the units of computation are prepared using conventional sequential programming languages, along with declaratively specified conditions under which each unit is enabled for execution.
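The pairing of a conventionally coded computation with a declarative enabling condition can be sketched as follows. This is an illustrative C++ analogue of the idea, not the CODE 2.0 notation itself; all names are invented for this example.

    // Illustrative sketch only: a sequential computation unit paired with
    // a declaratively specified firing rule over its input dependencies.
    #include <cstdio>
    #include <functional>
    #include <queue>

    struct Unit {
        std::queue<int> in_a, in_b;                  // incoming dependency arcs
        std::function<bool(const Unit&)> ready;      // declarative firing rule
        std::function<int(int, int)> compute;        // conventional sequential code

        // Fire the unit once its rule says all needed inputs are present.
        bool try_fire(int& out) {
            if (!ready(*this)) return false;
            int a = in_a.front(); in_a.pop();
            int b = in_b.front(); in_b.pop();
            out = compute(a, b);
            return true;
        }
    };

    int main() {
        Unit add;
        add.ready   = [](const Unit& u) {
            return !u.in_a.empty() && !u.in_b.empty();
        };
        add.compute = [](int a, int b) { return a + b; };

        add.in_a.push(2);                // a token arrives on one arc only
        int result;
        std::printf("fires early? %d\n", add.try_fire(result));  // 0: rule not met
        add.in_b.push(3);                // second dependency satisfied
        if (add.try_fire(result)) std::printf("result=%d\n", result);  // 5
        return 0;
    }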
The system is built upon a unique object-oriented model of compilation in which communication and synchronization mechanisms are implemented by parameterized class templates, used to tailor the translation of abstract specifications of communication and synchronization to efficient local models.
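The following toy C++ fragment suggests how parameterized class templates can specialize one abstract program to a particular communication mechanism; the class names are invented for illustration and do not reproduce CODE 2.0's internal templates.

    // A toy rendering of the idea: communication is a template parameter,
    // so the same abstract program text can be specialized to different
    // synchronization mechanisms at compile time.
    #include <cstdio>
    #include <mutex>

    // One "local model": shared-memory handoff guarded by a mutex.
    struct SharedMemoryChannel {
        void put(int v) {
            std::lock_guard<std::mutex> g(m);
            slot = v; full = true;
        }
        bool get(int& v) {
            std::lock_guard<std::mutex> g(m);
            if (!full) return false;
            v = slot; full = false; return true;
        }
        std::mutex m; int slot = 0; bool full = false;
    };

    // The abstract program is written once against the Channel parameter;
    // swapping the template argument retargets it to another mechanism
    // (e.g. a message-passing channel) without touching this code.
    template <class Channel>
    void produce_consume(Channel& ch) {
        ch.put(42);
        int v;
        if (ch.get(v)) std::printf("received %d\n", v);
    }

    int main() {
        SharedMemoryChannel ch;
        produce_consume(ch);
        return 0;
    }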
The attainment of the goals of the research is measured in the following ways. The CODE 2.0 system has been used by casual users in parallel programming classes, with uniformly positive results: the programs developed are simple and easy to read, and execute at least as efficiently as programs written in conventional parallel languages. Experimental measurement of the execution behavior of benchmark programs has shown that the executable code generated by CODE 2.0 is efficient, often within 5% of, and sometimes more efficient than, hand-written parallel programs. Portability with retention of execution efficiency has been demonstrated by implementations in two different execution environments: the synchronous message-passing paradigm of Ada, and the shared-memory environment of the Sequent Dynix operating system.
Designed to work only with the MPICH implementation of MPI; it has been tested with versions 1.0.10 through 1.0.13. It also requires Tcl and Tk: it has been tested with Tcl 7.3/Tk 3.6, Tcl 7.4/Tk 4.0 and Tcl 7.5/Tk 4.1, and works best with a colour display.