Newest entries are first. Older changes can be found here.
31st August 1995
- /parallel/environments/pvm3/distribution/xpvm/src.ptch.110to111.tarzuu.Z
- XPVM Source patch from 1.1.0 to 1.1.1
- /parallel/standards/mpi/anl/mpiman.ps.Z
- MPICH Model MPI Implementation Reference Manual (Draft)
ABSTRACT:
This document contains detailed documentation on the routines that
are part of the MPICH model MPI implementation. As an alternative to
this manual, the reader should consider using the script mpiman; this
is a script that uses xman to provide an X11 Window System interface to
the data in this manual.
Authors: William Gropp and Ewing Lusk, Mathematics and Computer Science
Division, Argonne National Laboratory, USA, and Anthony Skjellum,
Department of Computer Science, Mississippi State University, USA.
- /parallel/standards/mpi/anl/adiman.ps.Z
- MPICH ADI Implementation Reference Manual (Draft)
ABSTRACT:
This document contains detailed documentation on the routines that
are part of the ADI part of the MPICH model MPI implementation. The
ADI is a multi-level interface; there are some routines (the core)
that must be implemented, and an additional set (the extensions) that
can be implemented as necessary and desired for improved performance
(see the collective operations below). In addition, the ADI may itself
be implemented on a lower-level interface called the "channel"
interface; this is detailed in a separate report. As an alternative to
this manual, the reader should consider using the script mpiman; this
is a script that uses xman to provide an X11 Window System interface to
the data in this manual. The ADI definition presented here is still
evolving; while we intend to make no major incompatible changes, we
cannot rule them out. Unlike MPI, the ADI is not a standard but is
rather part of a research project.
Authors: William Gropp and Ewing Lusk, Mathematics and Computer Science
Division, Argonne National Laboratory, USA, and Anthony Skjellum,
Department of Computer Science, Mississippi State University, USA.
- /parallel/standards/mpi/anl/workingnote/newadi.ps.Z
- MPICH Working Note: Creating a new MPICH device using the Channel
interface (DRAFT)
ABSTRACT:
The MPICH implementation of MPI uses a powerful and efficient layered
approach to simplify porting MPI to new systems. One interface that
can be used is the channel interface; this interface defines a
collection of simple data-transfer operations. This interface can
adapt to additional functionality, such as asynchronous or nonblocking
transfers or remote memory operations. This paper describes this
interface, some of the special issues, and gives instructions on
creating a new MPICH implementation by implementing just a few
routines.
Authors: William Gropp, Argonne National Laboratory, Mathematics and
Computer Science Division and Ewing Lusk, University of Chicago.
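The working note above is the authoritative description of the channel interface; as a rough illustration of the idea it describes (an entire device reduced to a handful of simple data-transfer routines), here is a hedged C sketch. The chan_device structure and the loop_send/loop_recv routines are inventions for this sketch only, not the actual MPICH channel-interface entry points; a real device would implement the same shape of operations over shared memory, TCP or a vendor library.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical channel device: the port is expressed as a couple of
     * blocking data-transfer operations behind function pointers. */
    typedef struct {
        int (*send)(int dest, const void *buf, int len);
        int (*recv)(int *src, void *buf, int maxlen);
    } chan_device;

    /* Toy "loopback" implementation so the sketch is self-contained:
     * messages are staged in a single static buffer. */
    static char stage[256];
    static int  stage_len = 0, stage_src = -1;

    static int loop_send(int dest, const void *buf, int len)
    {
        (void)dest;
        memcpy(stage, buf, (size_t)len);
        stage_len = len;
        stage_src = 0;               /* pretend rank 0 sent it */
        return 0;
    }

    static int loop_recv(int *src, void *buf, int maxlen)
    {
        int n = stage_len < maxlen ? stage_len : maxlen;
        memcpy(buf, stage, (size_t)n);
        *src = stage_src;
        return n;
    }

    int main(void)
    {
        chan_device dev = { loop_send, loop_recv };
        char msg[64];
        int  src, n;

        dev.send(1, "hello via channel device", 25);
        n = dev.recv(&src, msg, sizeof msg);
        printf("received %d bytes from %d: %.*s\n", n, src, n, msg);
        return 0;
    }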
30th August 1995
- /parallel/environments/chimp/release/
- CHIMP (Common High-level Interface to Message Passing)
project from the EPCC (Edinburgh Parallel Computing Centre)
at the University of Edinburgh, Scotland.
- /parallel/environments/chimp/release/chimpv2.1.1b.tar.Z
- CHIMP v2.1.1b distribution
- /parallel/environments/chimp/release/chimp-sun4.tar.Z
- CHIMP binary distribution for Sun 4 (SPARC)
- /parallel/vendors/inmos/ieee-hic/data/C101-04.ps.gz
- ST C101 (rev 4) Parallel DS-Link Adaptor Datasheet - Engineering
Data
This allows high-speed serial DS-Links to be interfaced to buses,
peripherals and microprocessors. It is particularly suitable for
interfacing such devices to interconnects which deal in packets
consisting of data and header information. This header information may
be used to demultiplex packets from different sources and/or route
them through one or more switches. It has two modes of operation. In
the first (Transparent Mode), with packetization disabled, it provides
simple access to the DS-Link: all data provided to the STC101 is
transmitted down the DS-Link. In the second (Packetizing Mode) it can
be used by devices such as processors to communicate with devices such
as the ST C104 Asynchronous Packet Switch (APS) [C104-06.ps.gz below]
(datasheet 42 1 470 06). In both modes it can be used as a 16-bit
processor interface, a 32-bit processor interface or a 16-bit processor
interface with token interfaces. This document includes changes for
Revs A and B silicon.
August 1995. 66 pages. 4 Mbytes (4044143 bytes) uncompressed.
Document number 42 1593 04.
- /parallel/environments/lam/distribution/lam52-doc.nologo.tar.gz
- LAM 5.2 documentation (without logo)
- /parallel/teaching/hpctec/epcc/tech-watch/hpf-course.html.tar.gz
- High Performance Fortran (HPF) Course Students Notes as HTML files.
- /parallel/standards/hippi/hippi-serial_1.5.ps.gz
- /parallel/standards/hippi/hippi-serial_1.5_w_bars.ps.gz
- High-Performance Parallel Interface - Serial Specification
ABSTRACT:
This standard specifies a physical-level interface for transmitting
digital data at 800 Mbit/s or 1600 Mbit/s serially over fiber-optic or
coaxial cables across distances of up to 10 km. The signalling
sequences and protocol used are compatible with HIPPI-PH, ANSI
X3.183-1991, which is limited to 25 m distances. HIPPI-Serial may be
used as an external extender for HIPPI-PH ports, or may be integrated
as a host's native interface without HIPPI-PH.
- /parallel/standards/hippi/hippi-serial_1.5_changes.ps.gz
- Changes between HIPPI-Serial Rev 1.4 and Rev 1.5
- /parallel/standards/hippi/hippi-sc_2.8.ps.gz
- High-Performance Parallel Interface - Physical Switch Control
This is an X3T11 maintenance copy of ANSI X3.222-1993.
ABSTRACT:
This standard provides a protocol for controlling physical layer
switches which are based on the High-Performance Parallel Interface, a
simple high-performance point-to-point interface for transmitting
digital data at peak data rates of 800 or 1600 Mbit/s between
data-processing equipment.
- /parallel/standards/hippi/minutes/aug95_hippi_min.ps.gz
- /parallel/standards/hippi/minutes/aug95_hippi_min.txt
- Minutes for August 1995 HIPPI meeting
- /parallel/standards/hippi/minutes/jun95_hippi_min.ps.gz
- /parallel/standards/hippi/minutes/jun95_hippi_min.txt
- Minutes for June 1995 HIPPI meeting
- /parallel/libraries/memory/global-array/
- GA Toolkit developed at the Molecular Science Research Center,
Pacific Northwest Laboratory, USA. It provides a portable and
efficient "shared-memory" programming interface through which
each process in a MIMD parallel program can asynchronously
access logical blocks of physically distributed matrices,
without the need for explicit cooperation by other
processes. Platforms: SP1, iPSC, Delta, Paragon, KSR-2,
workstations.
- /parallel/libraries/memory/global-array/global2.0.tar.Z
- Global Array (GA) Toolkit V2.0 distribution
- /parallel/libraries/memory/global-array/global1.3.1.tar.Z
- Global Array (GA) Toolkit V1.3.1 distribution
- /parallel/libraries/memory/global-array/global1.2.tar.Z
- Global Array (GA) Toolkit V1.2 distribution
- /parallel/libraries/memory/global-array/Supercomputing94.ps.Z
- Global Arrays: A Portable 'Shared-Memory' Programming Model for
Distributed Memory Computers
ABSTRACT:
Portability, efficiency, and ease of coding are all important
considerations in choosing the programming model for a scalable
parallel application. The message-passing programming model is widely
used because of its portability, yet some applications are too complex
to code in it while also trying to maintain a balanced computation
load and avoid redundant computations. The shared-memory programming
model simplifies coding, but it is not portable and often provides
little control over interprocessor data transfer costs. This paper
describes a new approach, called Global Arrays (GA), that combines the
better features of both other models, leading to both simple coding
and efficient execution. The key concept of GA is that it provides a
portable interface through which each process in a MIMD parallel
program can asynchronously access logical blocks of physically
distributed matrices, with no need for explicit cooperation by other
processes. We have implemented GA libraries on a variety of computer
systems, including the Intel DELTA and Paragon, the IBM SP-1 - all
message-passers, the Kendall Square KSR-2 - a nonuniform access
shared-memory machine, and networks of Unix workstations. We discuss
the design and implementation of these libraries, report their
performance, illustrate the use of GA in the context of computational
chemistry applications, and describe the use of a GA performance
visualization tool.
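To make the programming model described above concrete, here is a small C sketch of the kind of one-sided block put/get the abstract talks about. The ga_create/ga_put/ga_get names and signatures are illustrative assumptions modelled on that description, not the real Global Arrays bindings, and the stubs back the "global" matrix with one local block so the sketch compiles and runs on a single process.

    #include <stdio.h>

    /* Illustrative stand-ins for the GA-style operations described in the
     * abstract (create / put / get); real Global Arrays bindings differ. */
    #define N 16
    static double global[N][N];    /* stands in for the distributed matrix */

    static int ga_create(int rows, int cols, const char *name)
    {
        (void)rows; (void)cols;
        printf("created global array %s\n", name);
        return 1;
    }

    static void ga_put(int g, int ilo, int ihi, int jlo, int jhi,
                       const double *buf, int ld)
    {
        (void)g;
        for (int i = ilo; i <= ihi; i++)
            for (int j = jlo; j <= jhi; j++)
                global[i][j] = buf[(i - ilo) * ld + (j - jlo)];
    }

    static void ga_get(int g, int ilo, int ihi, int jlo, int jhi,
                       double *buf, int ld)
    {
        (void)g;
        for (int i = ilo; i <= ihi; i++)
            for (int j = jlo; j <= jhi; j++)
                buf[(i - ilo) * ld + (j - jlo)] = global[i][j];
    }

    int main(void)
    {
        double block[4][4] = {{1, 2, 3, 4}};
        int g_a = ga_create(N, N, "A");

        /* One-sided put of a 4x4 logical block, then a later get of the
         * same block; in real GA the block may live in another process's
         * memory and the transfer needs no cooperation from that process. */
        ga_put(g_a, 4, 7, 8, 11, &block[0][0], 4);
        ga_get(g_a, 4, 7, 8, 11, &block[0][0], 4);
        printf("block[0][0] = %g\n", block[0][0]);
        return 0;
    }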
- /parallel/performance/tools/pmt/
- Partition Management Tool (PMT) binaries for various architectures.
- /parallel/performance/tools/pmt/pmt.irix.R1.3.tar.gz
- /parallel/performance/tools/pmt/pmt.irix.R1.2.tar.gz
- /parallel/performance/tools/pmt/pmt.irix.tar.gz
- PMT for SGI IRIX
- /parallel/performance/tools/pmt/pmt.paragon.R1.31.tar.gz
- /parallel/performance/tools/pmt/pmt.paragon.R1.3.tar.gz
- /parallel/performance/tools/pmt/pmt.paragon.R1.2.tar.gz
- /parallel/performance/tools/pmt/pmt.paragon.tar.gz
- PMT for Paragon
- /parallel/performance/tools/pmt/pmt.solaris2.4.R1.31.tar.gz
- PMT for Sun Solaris 2.4
- /parallel/performance/tools/pmt/pmt.solaris.R1.3.tar.gz
- /parallel/performance/tools/pmt/pmt.solaris.R1.2.tar.gz
- /parallel/performance/tools/pmt/pmt.solaris.tar.gz
- PMT for Sun Solaris (earlier versions?)
- /parallel/vendors/intel/paragon/software/aachen/
- Paragon tools from Aachen University, Germany.
- /parallel/vendors/intel/paragon/software/aachen/apps/
- Applications
- /parallel/vendors/intel/paragon/software/aachen/apps/fract.tar.gz
- Fract v2.1 ported to the Intel Paragon. It creates fractals of types:
Mandelbrot set; Julia set of z^2+c, c*sinz, c*cosz, c*expz; Spider;
Phoenix; Glass; Noel1; Noel2; Newton Power; Tetrate; Barnsley1;
Barnsley2 and Barnsley3. This is implemented on the Paragon as a
client-server model, with node 0 being the server and nodes 1-n being
clients, and outputs an image in GIF format.
Author: Markolf Gudjons <markolf@lfbs.rwth-aachen.de>.
- /parallel/vendors/intel/paragon/software/aachen/contrib/
- Tools for the Intel Paragon XP/E and XP/S. See also:
http://www.lfbs.rwth-aachen.de/GnuTools.html
- /parallel/vendors/intel/paragon/software/aachen/contrib/gnutools.README
- Information on GNU tools for the Intel Paragon.
- /parallel/vendors/intel/paragon/software/aachen/contrib/GPL.TXT
- GNU General Public License (GPL) for all GNU tools.
- /parallel/vendors/intel/paragon/software/aachen/contrib/gnutools.MANIFEST
- List of versions of GNU tools
- /parallel/vendors/intel/paragon/software/aachen/contrib/gnutools-bin.tar.gz
- GNU tools binaries for Intel Paragon
- /parallel/vendors/intel/paragon/software/aachen/contrib/gnutools-src.tar.gz
- GNU tools sources for Intel Paragon
- /parallel/vendors/intel/paragon/software/aachen/contrib/bash.gz
- /parallel/vendors/intel/paragon/software/aachen/contrib/README.bash
- The Bourne Again Shell (bash).
- /parallel/vendors/intel/paragon/software/aachen/contrib/bison-1.19.tar.gz
- GNU Bison - yacc-like grammar generator
- /parallel/vendors/intel/paragon/software/aachen/contrib/gcc-2.5.8.tar.gz
- GNU C (gcc), C++ (g++) compiler suite
- /parallel/vendors/intel/paragon/software/aachen/contrib/libg++-2.5.3.tar.gz
- GNU C++ run-time libraries, required for g++.
- /parallel/vendors/intel/paragon/software/aachen/contrib/gdb-4.12.tar.gz
- /parallel/vendors/intel/paragon/software/aachen/contrib/README.gdb
- The GNU debugger for the Intel Paragon
- /parallel/vendors/intel/paragon/software/aachen/contrib/gdb.tar.gz
- GDB binaries for Mach3, OSF with some documentation.
- /parallel/vendors/intel/paragon/software/aachen/contrib/gtar.gz
- /parallel/vendors/intel/paragon/software/aachen/contrib/README.gtar
- The GNU tar program (requires gzip).
- /parallel/vendors/intel/paragon/software/aachen/contrib/gzip
- /parallel/vendors/intel/paragon/software/aachen/contrib/gzip.Z
- /parallel/vendors/intel/paragon/software/aachen/contrib/README.gzip
- The GNU zip program (required for gtar).
- /parallel/vendors/intel/paragon/software/aachen/contrib/rdate.tar.gz
- /parallel/vendors/intel/paragon/software/aachen/contrib/rdate.README
- rdate for the Intel Paragon.
- /parallel/vendors/intel/paragon/software/aachen/papers/
- Papers
- /parallel/vendors/intel/paragon/software/aachen/papers/hicss.ps.gz
- PUMA: An Operating System for Massively Parallel Systems
ABSTRACT:
This paper presents an overview of PUMA, Performance-oriented,
User-managed Messaging Architecture, a message passing kernel. Message
passing in PUMA is based on portals - an opening in the address space
of an application process. Once an application process has established
a portal, other processes can write values into the portal using a
simple send operation. Because messages are written directly into the
address space of the receiving process, there is no need to buffer
messages in the PUMA kernel and later copy them into the application's
address space. PUMA consists of two components: the quintessential
kernel (Q-Kernel) and the process control thread (PCT). While the PCT
makes management decisions, the Q-Kernel controls access and
implements the policies specified by the PCT.
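As a very rough, single-process illustration of the portal idea sketched in the abstract (the receiver exposes a region of its own address space and a sender deposits data straight into it, so the kernel never has to buffer and copy), here is a hedged C sketch. The portal_t structure and the portal_open/portal_send routines are hypothetical names for this illustration only, not the PUMA or Q-Kernel interface.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical portal descriptor: a window in the receiver's address
     * space that senders may write into directly (no intermediate copy). */
    typedef struct {
        void  *base;    /* start of the exposed region */
        size_t size;    /* bytes available             */
        size_t used;    /* bytes already deposited     */
    } portal_t;

    static void portal_open(portal_t *p, void *buf, size_t size)
    {
        p->base = buf; p->size = size; p->used = 0;
    }

    /* "Send" deposits data straight into the portal's memory, which in the
     * PUMA model means directly into the receiving application's space. */
    static int portal_send(portal_t *p, const void *msg, size_t len)
    {
        if (p->used + len > p->size) return -1;    /* no room */
        memcpy((char *)p->base + p->used, msg, len);
        p->used += len;
        return 0;
    }

    int main(void)
    {
        char inbox[128];
        portal_t p;

        portal_open(&p, inbox, sizeof inbox);      /* receiver side */
        portal_send(&p, "result: 42", 11);         /* sender side   */
        printf("receiver sees: %s\n", inbox);
        return 0;
    }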
- /parallel/vendors/intel/paragon/software/aachen/papers/paragon_linpack_benchmark.ps.Z
- LU Factorization and the LINPACK Benchmark on the Intel Paragon
ABSTRACT:
An implementation of the LINPACK benchmark is described which
achieves 72.9 Gflop/sec on 1872 nodes of an Intel Paragon.
Implications concerning the architecture of the Paragon and the
necessity of a high-performance operating system like SUNMOS are
discussed.
- /parallel/vendors/intel/paragon/software/aachen/papers/pgon_selfstudy.ps.gz
- Parallel Processing: A Self-Study Introduction; A First Course in
Programming the Intel Paragon
ABSTRACT:
This booklet is written in the hope that we can accelerate the
learning process of readers using an Intel Paragon computer for the
first time. It is not intended as a replacement for the manufacturer's
programming and reference manuals, but rather to complement them and
to guide the new user through the jungle of terminology and
programming techniques. We hope that we have written this guide such
that the reader can complete this tutorial without requiring such
manuals. Our long experience with advanced computer systems has shown
us that these users want to become familiar with a new system as
quickly as possible, without having to wade through, say, a metre
shelf of manuals to be able to write a simple program. Rather than
write yet another tutorial, we have tried to give the user guidance in
using the Paragon, with concise information, with the hope that they
can get up to speed as quickly as possible. Very little instruction on
parallel algorithms is given in this guide; we have concentrated
instead on use of the system. The text and examples in this guide were
developed on the 100-node system at parallab. Almost everything in
this guide should be applicable to all Paragon system configurations.
COPYRIGHT NOTICE: This document may be distributed freely for
educational purposes provided it is distributed in its entirety,
including this copyright notice. This booklet may be used as course
material or as on-line documentation provided acknowledgement is given
to parallab. If copies of this document are distributed then parallab
must be informed at the address below: parallab secretary, parallab,
Dept. of Informatics, University of Bergen, N-5020 Bergen, Norway
email: adm@parallab.uib.no
Author: Jeremy Cook.
- /parallel/vendors/intel/paragon/software/aachen/tools/
- Tools
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/
- Miscellaneous utilities
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/pmt.tar.gz
- Partition Management Tool (PMT) binaries
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/pstat.tar.gz
- Displays gang-scheduled parallel applications running in the compute
partition. Binary only.
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/rpcgen.tar.gz
- rpcgen tool (binary only).
Author: Adrian <adri@lfbs.rwth-aachen.de>.
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/xpartinfo.tar.gz
- X11 wrapper around partinfo -s. Requires pstat. (Only the last author
is given.)
Author: J. David Morgenthaler <jdm@sdsc.edu>.
- /parallel/vendors/intel/paragon/software/aachen/tools/misc/xvmstat.tar.gz
- xvmstat - A tool for visualizing virtual memory on the service nodes
of a Paragon.
- /parallel/performance/tools/pablo/
- This directory contains include files, SDDF libraries and utility
program executables that are part of the Pablo Performance Analysis
Environment distribution. We are unable to release the entire Pablo
environment in binary format because of licensing restrictions with
OSF/Motif.
- /parallel/performance/tools/pablo/NOTES
- Full LICENSE terms - read these first before proceeding.
- /parallel/performance/tools/pablo/README
- Overview of SDDF files.
- /parallel/performance/tools/pablo/SDDF.ps.gz
- The Pablo Self-Defining Data Format
ABSTRACT:
This manual documents the Pablo Self-Defining Data Format (SDDF), a
flexible file meta-format designed to describe the structural layout
of event records in performance trace files. We present our motivation
for developing SDDF, high-level and in-depth coverage of the format
itself, an explanation of the C++ interface library, and sample files
and code demonstrating the use of SDDF.
Author: Ruth A. Aydt.
- /parallel/performance/tools/pablo/SDDFStatistics.ps.gz
- The Summary File Format for SDDFStatistics
ABSTRACT:
The SDDFStatistics program gathers a variety of statistics about the
data fields of SDDF files, and can save these statistics to summary
files which are themselves in SDDF format. These summary files are
used by SDDFStatistics as "caches" of summary data, to minimize
startup time when interactively browsing SDDF data files using the
graphical user interface provided by SDDFStatistics. They may also be
used by other Pablo tools, including the main Pablo Visualization
Environment, whose displays may be configured more easily using
information contained in the summaries. This document defines and
describes the format of the summary files produced by SDDFStatistics,
and includes an example consisting of an input SDDF data file and the
corresponding summary file generated by SDDFStatistics.
Author: Dave Kohr.
- /parallel/performance/tools/pablo/SDDFlibrary.ps.gz
- A Description of the Classes and Methods of the Pablo SDDF Interface
Library
Author: The Picasso Research Group.
- /parallel/performance/tools/pablo/SDDFlibrary.tar.gz
- Source and include files for the Self Defining Data Format library
and standalone programs for manipulating SDDF files.
- /parallel/performance/tools/pablo/SDDFlibrary.DEC.tar.gz
- Self Defining Data Format include files, Library and Standalone
programs for DECstation 5000/200; ULTRIX V4.2 (Rev. 96).
- /parallel/performance/tools/pablo/SDDFlibrary.Paragon.tar.gz
- Self Defining Data Format include files, Library and Standalone
programs for the Intel Paragon; OSF/1 Release 1.0.4.
- /parallel/performance/tools/pablo/SDDFlibrary.Sparc.tar.gz
- Self Defining Data Format include files, Library and Standalone
programs for SparcStation 10; SunOS 4.1.3; static libs.
- /parallel/performance/tools/pablo/README.Pablo
- Overview of Pablo files.
- /parallel/performance/tools/pablo/Pablo.patches
- An ASCII file containing patches for bugs discovered in this release.
This will be updated as problems are reported and fixed. The file
contains a last-update date.
- /parallel/performance/tools/pablo/PabloOverview.ps.gz
- An Overview of the Pablo Performance Analysis Environment
ABSTRACT:
As massively parallel, distributed memory systems replace traditional
vector supercomputers, effective application program optimization and
system resource management become more than research curiosities;
they are crucial to achieving substantial fractions of peak
performance for scientific application codes. By recording dynamic
activity, either at the application or system software level, one can
identify and remove performance bottlenecks. Pablo is a performance
analysis environment designed to provide performance data capture,
analysis, and presentation across a wide variety of scalable parallel
systems. The Pablo environment includes software performance
instrumentation, graphical performance data reduction and analysis,
and support for mapping performance data to both graphics and sound.
Current research directions include complete performance data
immersion via head-mounted displays and the integration of Pablo with
data parallel Fortran compilers based on the emerging High Performance
Fortran (HPF) standard.
- /parallel/performance/tools/pablo/PabloGuide.ps.gz
- An Informal Guide to Using Pablo
Author: Ruth A. Aydt.
- /parallel/performance/tools/pablo/PabloGuideNoScreens.ps.gz
- An Informal Guide to Using Pablo
As above, but without screen dumps.
Author: Ruth A. Aydt.
- /parallel/performance/tools/pablo/PabloSrc.tar.gz
- Pablo source including the instrumentation, visualization and
sonification systems.
- /parallel/performance/tools/pablo/PabloSrcNoSound.tar.gz
- Pablo source including the instrumentation and visualization systems.
The sonification system is not included.
- /parallel/performance/tools/pablo/Porsonify.tar.gz
- Standalone implementation of our data sonification system. It is
capable of supporting both the sampled audio of the Sun SparcStation
and MIDI synthesizers.
- /parallel/performance/tools/pablo/PorsonifyClasses.ps.gz
- A Description of the Classes and Methods of the Porsonify Audio
Software
Author: The Picasso Research Group.
- /parallel/performance/tools/pablo/PorsonifyThesis.ps.gz
- A Portable System for Data Sonification
Author: Tara Maja Madhyastha.
- /parallel/performance/tools/pablo/PorsonifyUserGuide.ps.gz
- Porsonify: A Portable System for Data Sonification
ABSTRACT:
Porsonify is a portable system for mapping data to sound. Sound is an
interesting medium for presenting data because it can potentially
highlight characteristics of data that cannot easily be seen. For
example, a movie soundtrack or sound effects that accompany a video
game convey information complementary to the imagery. The elements of
sound (e.g., pitch, volume, duration and timbre) can be used in the
same way that visual elements (such as color, form, and line) are
manipulated to present and analyze data in visual displays. The use of
sound to present data, the auditory equivalent of visualization, is
called sonification. Porsonify is designed to encourage
experimentation with aural data presentation, or sonification, on a
variety of sound devices.
Author: Tara M. Madhyastha.
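As a toy illustration of the kind of mapping sonification depends on (a data value driving one element of sound, here pitch), the following C sketch scales values from a data range onto MIDI note numbers. The mapping, the note range and the sample data are made up for this example; Porsonify's own configuration and device interfaces are described in the documents above.

    #include <stdio.h>

    /* Toy sonification mapping: scale a data value in [lo, hi] linearly
     * onto MIDI note numbers 48..84 (three octaves).  A sonification
     * system would then send the note to a MIDI device or a sample
     * player; here we simply print it. */
    static int value_to_note(double v, double lo, double hi)
    {
        if (v < lo) v = lo;
        if (v > hi) v = hi;
        return 48 + (int)((v - lo) / (hi - lo) * 36.0 + 0.5);
    }

    int main(void)
    {
        double samples[] = { 0.0, 2.5, 7.1, 9.9 };   /* made-up data */
        for (int i = 0; i < 4; i++)
            printf("value %4.1f -> MIDI note %d\n",
                   samples[i], value_to_note(samples[i], 0.0, 10.0));
        return 0;
    }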
- /parallel/performance/tools/pablo/Tracelibrary.tar.gz
- Instrumentation library: binary archive files and include files for
multiple supported architectures: Paragon, iPSC/860 and CM-5.
- /parallel/performance/tools/pablo/README.Tools
- README.Tools
- /parallel/performance/tools/pablo/Instrument.ps.gz
- Pablo Instrumentation Environment User Guide
Author: Roger J. Noe.
- /parallel/performance/tools/pablo/iPabloGuide.ps.gz
- iPablo User's Guide
ABSTRACT:
This document is a guide to the use of the iPablo graphical user
interface to the Pablo Instrumentation Environment. Although using
iPablo is not mandatory, it will allow you to quickly and easily
specify the most common source code instrumentations without
laboriously modifying your source code manually. Because this document
is intended as a user's guide to the iPablo graphical interface of the
Pablo Instrumentation Environment, its focus is on using iPablo to
instrument application program source code in a variety of ways.
iPablo is a Motif-based, X Windows application that currently supports
the interactive instrumentation of procedure calls and outer loops in
C source programs. Future versions of iPablo will support the
interactive instrumentation of Fortran 77 programs as well.
Author: Keith A. Shields.
18th August 1995
- /parallel/languages/sisal/
- SISAL is an acronym for Streams and Iteration in a Single
Assignment Language.
- /parallel/languages/sisal/distribution/
- SISAL distribution from LLNL
- /parallel/languages/sisal/distribution/README
- Overview of files
- /parallel/languages/sisal/distribution/mini-faq.html
- /parallel/languages/sisal/distribution/mini-faq.txt
- Short version of the full SISAL FAQ (frequently asked questions)
- /parallel/languages/sisal/distribution/EXAMPLES-13.0.tar.Z
- Example programs
- /parallel/languages/sisal/distribution/MANUAL.12.7.tar.Z
- Full OSC manual
- /parallel/languages/sisal/distribution/OSC-13.0.2.tar.Z
- Optimizing Sisal Compiler (OSC)
- /parallel/languages/sisal/distribution/OSC_small_installer
- OSC small installation script
- /parallel/languages/sisal/distribution/SISALMODE.1.4.tar.Z
- Sisal emacs mode v1.4
- /parallel/languages/sisal/distribution/SISALMODE.1.3.tar.Z
- Sisal emacs mode v1.3
- /parallel/languages/sisal/distribution/twine.1.2.tar.Z
- Sisal source level breakpoint debugger v1.2 (requires OSC). Versions
for Sun 3, SGI, generic 32-bit BSD, and generic 32-bit System V systems
(Unix only).
- /parallel/languages/sisal/distribution/TUTORIAL1.2.tar.Z
- Sisal Tutorial v1.2
- /parallel/languages/sisal/distribution/PATCHES/
- Patches for update rather than entire release
- /parallel/languages/sisal/distribution/PC-SISAL/
- Sisal for MSDOS based machines (binaries and sources)
- /parallel/languages/sisal/distribution/tools/
- Sisal performance tools (not yet completed)
- /parallel/languages/sisal/distribution/bbn.dist.tar.Z
- OSC for BBN systems?
- /parallel/languages/sisal/distribution/osc.convex.tar.Z
- OSC for Convex systems?
- /parallel/languages/sisal/distribution/psa.input.sis
- Some sisal source code
- /parallel/languages/code/
- CODE is a visual parallel programming system. Programs are
created by drawing and then annotating a directed graph that
shows the structure of the parallel program. Nodes represent
sequential computations, and data flows on arcs
interconnecting the nodes, which run in parallel. The CODE
system can produce parallel programs for the Sequent Symmetry
as well as for any PVM-based network of heterogeneous
machines. The system itself runs on Suns.
See also http://www.cs.utexas.edu/users/code/.
- /parallel/languages/code/announcement
- Announcement of CODE 2.1a1 (first alpha release) on 21st July 1995.
The system is licensed and encrypted with PGP so you have to get the
key by email from the author before you can use the system. The
on-line registration form is at
http://www.cs.utexas.edu/users/code/License.html.
Author: Emery Berger <emery@cs.utexas.edu>.
- /parallel/languages/code/code2.1a2-sun.tar.pgp
- PGP encrypted CODE 2.1a2 distribution. PGP and the key are required
to decode this file. The key can be obtained from the author (see above).
- /parallel/languages/code/Code2.0-ReferenceManual/
- Code 2.0 Reference Manual
- /parallel/languages/code/Code2.0-UserManual/
- Code 2.0 User Manual
- /parallel/languages/code/CodeICS92.ps.Z
- /parallel/languages/code/DissBook.ps.Z
- /parallel/languages/code/DissUnifiedAppConcDbg.ps.Z
- /parallel/languages/code/Exp_Code_Hence.ps.Z
- /parallel/languages/code/KleynDissBook.ps.Z
- /parallel/languages/code/KleynRecursiveTypes.ps.Z
- /parallel/languages/code/newton_diss.tar.Z
- /parallel/languages/code/SC93tut.ps.Z
- /parallel/languages/code/ut-cs-94-229.ps.Z
- /parallel/languages/code/wet94.ps.Z
- CODE related papers.
17th August 1995
- /parallel/events/mppoi95
- The Second International Conference on Massively Parallel Processing
Using Optical Interconnections
Call for attendance, programme and registration form for conference
being held from 23rd-24th October 1995 at San Antonio, Texas, USA.
Sponsored by IEEE and IEEE Technical Committee on Computer
Architecture (TCCA).
Deadline for lower conference fee is 2nd October 1995.
Author: Eugen Schenfeld <eugen@research.nj.nec.com>.
- /parallel/languages/c/parallel-c++/ucpp/
- uC++ (pronounced micro-C++) is an extended C++ that provides
light-weight concurrency on uniprocessor and multiprocessor
computers running the UNIX operating system. uC++ extends C++
in somewhat the same way that C++ extends C. The extensions
introduce new objects that augment the existing control flow
facilities and provide concurrency. uC++ provides both
direct and indirect communication using the traditional
routine call mechanism. All aspects of uC++ are statically
type-checked. Furthermore, uC++ provides mechanisms for
precisely controlling the order in which requests are
serviced and for postponing work on already-accepted
requests.
- /parallel/languages/c/parallel-c++/ucpp/announcement
- Announcement of version 4.3 of uC++.
- /parallel/languages/c/parallel-c++/ucpp/uSystem/
- uC++ distribution area (requires dmake 4.0 to build)
Author: Peter A. Buhr <pabuhr@plg.uwaterloo.ca>.
- /parallel/languages/c/parallel-c++/ucpp/uSystem/uC++-4.3.ps.gz
- uC++ 4.3 reference manual (document source in main distribution).
Author: Peter A. Buhr <pabuhr@plg.uwaterloo.ca>.
- /parallel/languages/c/parallel-c++/ucpp/uSystem/u++-4.3.tar.gz
- uC++ 4.3 sources and documentation.
Author: Peter A. Buhr <pabuhr@plg.uwaterloo.ca>.
- /parallel/languages/c/parallel-c++/ucpp/uSystem/uSystem.ps.gz
- A copy of the old uSystem reference manual for concurrency in C. This
light-weight thread library is no longer supported.
- /parallel/languages/c/parallel-c++/ucpp/dmake/
- Dmake 4.0 distribution
- /parallel/languages/c/parallel-c++/ucpp/dmake/README
- Overview of Dmake - a make like tool.
Author: Dennis Vadura <dvadura@plg.uwaterloo.ca>.
- /parallel/languages/c/parallel-c++/ucpp/dmake/dmake40-msdos-exe.zip
- /parallel/languages/c/parallel-c++/ucpp/dmake/dmake40.zip
- /parallel/languages/c/parallel-c++/ucpp/dmake/dmake40.tar.gz
- /parallel/languages/c/parallel-c++/ucpp/dmake/dmake40.tar.Z
- Dmake 4.0 distribution in various formats (but not 'shar' as
described in the README).
Author: Dennis Vadura <dvadura@plg.uwaterloo.ca>.
- /parallel/architecture/communications/io/pario/papers/Kotz/nieuwejaar:workload.ps.Z
- /parallel/architecture/communications/io/pario/papers/Kotz/nieuwejaar:workload.note
- File-Access Characteristics of Parallel Scientific Workloads
ABSTRACT:
Phenomenal improvements in the computational performance of
multiprocessors have not been matched by comparable gains in I/O
system performance. This imbalance has resulted in I/O becoming a
significant bottleneck for many scientific applications. One key to
overcoming this bottleneck is improving the performance of parallel
file systems. The design of a high-performance parallel file system
requires a comprehensive understanding of the expected workload.
Unfortunately, until recently, no general workload studies of parallel
file systems have been conducted. The goal of the CHARISMA project was
to remedy this problem by characterizing the behavior of several
production workloads, on different machines, at the level of
individual reads and writes. The first set of results from the
CHARISMA project describe the workloads observed on an Intel iPSC/860
and a Thinking Machines CM-5. This paper is intended to compare and
contrast these two workloads for an understanding of their essential
similarities and differences, isolating common trends and
platform-dependent variances. Using this comparison, we are able to
gain more insight into the general principles that should guide
parallel file-system design. Keywords: parallel I/O, file systems,
workload characterization, file access patterns, multiprocessor file
systems.
Authors: Nils Nieuwejaar and David Kotz
- /parallel/applications/numerical/peigs/
- PeIGS is a public domain collection of parallel routines for
solving the general and standard symmetric eigenproblem. It
is being developed at the Department of Energy's Pacific
Northwest Laboratory operated by Battelle Memorial
Institute. The authors are David Elwood, George Fann, and
Richard Littlefield.
PeIGS runs on Intel Paragon, Intel DELTA, Intel iPSC/860
using native NX calls. PeIGS also runs on IBM SP-1, IBM SP-2,
Cray T3D, SGI-PowerChallenge, and clusters of workstations
using TCGMSG. TCGMSG is available as part of the Global Array
distribution available at
/parallel/libraries/memory/global-array/. A Global
Array-PeIGS interface is available in the Global Array
distribution.
- /parallel/applications/numerical/peigs/announcement
- Announcement of PeIGS.
Author: George I Fann <d3g270@snacker.pnl.gov>.
- /parallel/applications/numerical/peigs/README
- Overview of PeIGS files below and changes.
If you download PeIGS, the authors request that you send email to
gi_fann@pnl.gov giving your name, email address, and a short
description of what you want to do with PeIGS. This way they can send
you email describing any code updates/new bugs/bug fixes. They also
ask that you notify them at the email address above of any problems you
have with the code.
Author: George Fann <gi_fann@pnl.gov>.
- /parallel/applications/numerical/peigs/hpc95_rev01.ps.Z
- Minor revision of the PeIGS paper presented at High Performance
Computing '95.
- /parallel/applications/numerical/peigs/peigs21.tar.Z
- PeIGS sources.
- /parallel/environments/pvm3/povray/
- PVM-Povray - patches for Povray to work with PVM.
- /parallel/environments/pvm3/povray/announcement
- USENET article about PVM-Povray and its use.
Author: Gerry S Hayes <sumner+@CMU.EDU>.
- /parallel/environments/pvm3/povray/pvmpovray.txt
- Overview of PVM'd POV-Ray.
Author: Brad Kline <jbk@cray.com>.
- /parallel/environments/pvm3/povray/pvmpovray.tar.Z
- PVM POV-RAY sources.
- /parallel/environments/pvm3/tkpvm/
- tkPvm is the result of a wedding. The husband is pvm3.3.x
(preferably 3.3.7) and the wife is Tk4.0 or Tk3.6 (preferably
4.0). As usual with a marriage, both sides profit from the
combination. See also http://www.nici.kun.nl/tkpvm/
- /parallel/environments/pvm3/tkpvm/announcement
- Announcement of TKPVM 1.0b1 first beta release, details of
installation and general overview.
Author: Jan Nijtmans <nijtmans@nici.kun.nl>.
- /parallel/environments/pvm3/tkpvm/tkpvm1.0b1.tar.gz
- /parallel/environments/pvm3/tkpvm/tkpvm1.0b1.README
- TkPVM v1.0b1 distribution.
Author: Jan Nijtmans <nijtmans@NICI.KUN.NL>.
- /parallel/environments/pvm3/tkpvm/pvm3.3.8+.patch.gz
- /parallel/environments/pvm3/tkpvm/pvm3.3.8+.patch.README
- PVM3.3.8 (or later) patch for TkPVM
- /parallel/environments/pvm3/tkpvm/pluspatch.html
- /parallel/environments/pvm3/tkpvm/README.patch
- Detailed information of the patches to TCL and PVM (below).
- /parallel/environments/pvm3/tkpvm/tk3.6dash.patch.gz
- A patch for Tk3.6 to have the options "-dashes" and "-outlinestipple"
available for all appropriate canvas objects. Also the "-outline"
option is added for polygons. See also:
http://duizen.dds.nl/~quintess.
Author: Tako Schotanus <sst@bouw.tno.nl>.
- /parallel/environments/pvm3/tkpvm/tk4.0dash.patch.gz
- A patch for Tk4.0 to have the options "-dash" and "-outlinestipple"
available for all appropriate canvas objects. The file doc/canvas.n is
updated for all these changes. See also:
http://www.nici.kun.nl/~nijtmans/tcl/patch.html.
Author: Jan Nijtmans <nijtmans@nici.kun.nl>.
- /parallel/environments/pvm3/tkpvm/tcl7.4+.patch.gz
- /parallel/environments/pvm3/tkpvm/tcl7.4+.patch.README
- A patch for Tcl7.4, which adds support for Shared libraries and
Standalone applications. Look into the file "changes" for more
information.
- /parallel/environments/pvm3/tkpvm/tk4.0+.patch.gz
- /parallel/environments/pvm3/tkpvm/tk4.0+.patch.README
- A patch for Tk4.0, which adds support for Shared libraries and
Standalone applications. Look into the file "changes" for more
information.
- /parallel/environments/pvm3/tkpvm/tcl7.4p1+.patch.gz
- /parallel/environments/pvm3/tkpvm/tcl7.4p1+.patch.README
- A patch for Tcl7.4p1, which adds support for Shared libraries and
Standalone applications. Look into the file "changes" for more
information.
- /parallel/environments/pvm3/tkpvm/tk4.0p1+.patch.gz
- /parallel/environments/pvm3/tkpvm/tk4.0p1+.patch.README
- A patch for Tk4.0p1, which adds support for Shared libraries and
Standalone applications. Look into the file "changes" for more
information.
- /parallel/environments/pvm3/tkpvm/Tix4.0b4+a1.patch.gz
- /parallel/environments/pvm3/tkpvm/Tix4.0b4+a1.patch.README
- A patch for Tix4.0b2, which adds support for Shared libraries.
Standalone support is already prepared, but doesn't work yet.
- /parallel/languages/c/parallel-c++/classes/toops/
- TOOPS (Tool for Object Oriented Protocol Simulation): A C++
class library for process-oriented simulation primarily of
communication protocols. It currently runs under HP UX 9.0
(HP C++ 3.40) and under Borland C++ 3.1, but is highly
portable.
- /parallel/languages/c/parallel-c++/classes/toops/announcement
- Announcement of TOOPS.
Author: Manfred Kraess <kraess@ldvhp26.ldv.e-technik.tu-muenchen.de>.
- /parallel/languages/c/parallel-c++/classes/toops/readme.txt
- Overview of files.
- /parallel/languages/c/parallel-c++/classes/toops/toops12.exe
- /parallel/languages/c/parallel-c++/classes/toops/toops12.zip
- /parallel/languages/c/parallel-c++/classes/toops/toops12.tar.Z
- TOOPS Version 1.2 archived distribution. Includes the sources, a
tutorial and documentation.
- /parallel/environments/chimp/release/chimpv2.1.1a.tar.Z
- CHIMP v2.1.1a source distribution. Makes the distributions below and
other binary versions. Includes a userguide and installation
documentation.
- /parallel/environments/chimp/release/chimp-axposf.tar.Z
- CHIMP binary distribution for DEC Alpha AXP with OSF/1
- /parallel/environments/chimp/release/chimp-sun4.tar.Z
- CHIMP binary distribution for Sun 4 (SPARC)
- /parallel/environments/chimp/release/chimp-sgi5.tar.Z
- CHIMP binary distribution for SGI IRIX 5
- /parallel/environments/pvm3/wamm/announcement
- Announcement of WAMM (Wide Area Metacomputer Manager) 1.0. This is a
graphical interface, built on top of PVM, that helps users in the
following tasks:
virtual machine configuration and management (host adding, removing,
etc.); process management (spawn, kill, etc.); parallel compilation of
an application on several remote nodes; execution of UNIX commands on
remote hosts.
All functions are accessible through a graphical, user friendly
interface.
Author: Marcello-Gianluca <meta@calpar.cnuce.cnr.it>.
- /parallel/environments/pvm3/wamm/README-ITA
- WAMM 1.0 overview. [English / Italian]
- /parallel/environments/pvm3/wamm/wamm10.tar.gz
- WAMM 1.0 sources for the interface and the slave processes.
- /parallel/environments/pvm3/wamm/overview-eng.ps.gz
- /parallel/environments/pvm3/wamm/overview-ita.ps.gz
- CNUCE Technical Report C95-23 which contains a general description of
WAMM, its philosophy and some technical details. [English / Italian]
- /parallel/environments/pvm3/wamm/ug-eng.ps.gz
- /parallel/environments/pvm3/wamm/ug-ita.ps.gz
- CNUCE Technical Report C95-24. User's Guide containing instructions to
get, install, configure and use WAMM. [English / Italian]
14th August 1995
- /parallel/events/podc95
- UPDATED: "14th ACM Annual Symposium on Principles of Distributed Computing"
Call for participation, programme and registration for conference
being held from 20th-23rd August 1995 at Ottawa, Canada. Sponsored by
ACM SIGOPS and SIGACT.
See also http://www.scs.carleton.ca/scs/podc/podc.html and
http://www.cs.cornell.edu/Info/People/chandra/podc95/podc95.html
Author: James H. Anderson <anderson@cs.unc.edu>.
- /parallel/events/concur95
- UPDATED: "Sixth International Conference on Concurrency Theory"
Final program and call for registration for conference being held
from 21st-24th August 1995 at Philadelphia, Pennsylvania, USA.
See also http://www.cis.upenn.edu/concur95/concur95.html or
ftp://ftp.cis.upenn.edu/pub/concur95 for text, dvi and
postscript versions of the Call For Registration and the abstracts of
the accepted papers, as well as the latest information on CONCUR '95.
Author: Richard Gerber <rich@cs.umd.edu>.
- /parallel/events/euro-t3d
- 1st European Cray-T3D Workshop
Call for participation, programme and registration form for workshop
being held from 6th-7th September 1995 at Lausanne, Switzerland.
Sessions include: Single PE performance; Programming models;
Applications; Languages and Tools and T3D Sites experiences.
No fees are required (apart from accommodation). Deadline for
registration is 31st August 1995.
See also http://patpwww.epfl.ch/T3D_workshop/
- /parallel/events/europvm95
- Second European PVM Users' Group Meeting
Call for participation, programme and registration form for
conference being held from 13th-15th September 1995 at Lyon, France.
Sponsored by IBM-France, IBM-Europe; GDR-PRC Parallélisme Réseaux
Système; Matra Cap Systems; CRAY Research and École normale
supérieure de Lyon.
Topics: Tools; Libraries; Extensions and Improvements; Vendors
Implementations; Programming Environments; Numerical Kernels;
Scheduling and Load Balancing; Benchmarking and Pseudo-Applications;
Irregular Structures and Algorithms; CFD; Image Processing; Structural
Analysis; Chemistry; Aerodynamics and others.
See also http://www.ens-lyon.fr/~vigourou/.
Author: Xavier-Francois Vigouroux <vigourou@cri.ens-lyon.fr>.
- /parallel/events/wopar
- Workshop on Parallel Computing (Turkey)
Call for participation for conference being held from 18th-22nd
September 1995 at Ankara, Turkey. Sponsored by METU and IBM.
- /parallel/events/ecmum3
- The Third European CM Users Meeting
Call for presentations, call for participation and registration form
for meeting being held from 16th-17th October 1995 at Parma, Italy.
The ECMUM3 will feature papers regarding computational problems
solved with parallel techniques on CM-2/200, and CM-5 families of
computers.
Deadlines: Abstracts of presentations: 15th September 1995;
Registration: 15th September 1995.
See also http://www.ce.unipr.it/pardis/ecmum/ecmum3.html
Author: Roch Bourbonnais <roch@alofi.etca.fr>.
- /parallel/events/ipca95
- International Conference On Parallel Algorithms (China)
Call for participation and programme for conference being held from
16th-19th October 1995 at Wuhan, China.
Author: Lishan Kang <lkang@ringer.cs.utsa.edu>.
- /parallel/events/ipps96
- Updated: "10th International Parallel Processing Symposium"
Call for papers and participation for conference being held from
15th-19th April 1996 at Sheraton Waikiki Hotel, Honolulu, Hawaii, USA.
Sponsored by IEEE Computer Society Technical Committee on Parallel
Processing in cooperation with ACM SIGARCH.
Workshops: Heterogeneous Computing; Parallel and Distributed Real
Time Systems; Reconfigurable Architectures; I/O in Parallel and
Distributed Systems; High Speed Network Computing; Solving Irregular
Problems on Distributed Memory Machines; Job Scheduling Strategies for
Parallel Processing and Randomized Parallel Computing. Proposals by
29th September 1995.
Topics: Performance Modeling/Evaluation; Signal & Image Processing
Systems; Parallel Implementations of Applications; Parallel
Programming Environments; Parallel Algorithms; Parallel Architectures;
Parallel Languages; Special Purpose Processors; Parallelizing
Compilers; Scientific Computing; VLSI Systems; Network Computing and
others.
Deadlines: Papers: 20th September 1995; Workshops: 29th September
1995; Tutorials: 31st October 1995; Notification: 20th December 1995;
Camera-ready papers: 23rd January 1996.
See also http://www.usc.edu/dept/ceng/prasanna/home.html
Author: D N Jayasimha <djayasim@magnus.acs.ohio-state.edu>.
- /parallel/vendors/inmos/ieee-hic/data/C101-03.ps.gz
- ST C101 (rev 3) Parallel DS-Link Adaptor Datasheet - Preliminary
Datasheet.
This allows high-speed serial DS-Links to be interfaced to buses,
peripherals and microprocessors. It is particularly suitable for
interfacing such devices to interconnects which deal in packets
consisting of data and header information. This header information may
be used to demultiplex packets from different sources and/or route
them through one or more switches. It has two modes of operation. In
the first (Transparent Mode), with packetization disabled, it provides
simple access to the DS-Link: all data provided to the STC101 is
transmitted down the DS-Link. In the second (Packetizing Mode) it can
be used by devices such as processors to communicate with devices such
as the ST C104 Asynchronous Packet Switch (APS) [C104-06.ps.gz below]
(datasheet 42 1 47 0 05). In both modes it can be used as a 16-bit
processor interface, a 32-bit processor interface or a 16-bit processor
interface with token interfaces. This document includes changes for
Revs A and B silicon.
April 1995. 66 pages. 18 Mbytes (18230746 bytes) uncompressed.
Document number 42 1 593 03.
- /parallel/vendors/inmos/ieee-hic/data/C104-06.ps.gz
- ST C104 (rev 6) Asynchronous Packet Switch (APS) Preliminary
Datasheet.
This is a complete, low latency, packet routing switch on a single
chip. It connects 32 high bandwidth serial communications links to
each other via a 32 by 32 way non-blocking crossbar switch, enabling
packets to be routed from any of its links to any other link. The
links operate concurrently and the transfer of a packet between one
pair of links does not affect the data rate or latency for another
packet passing between a second pair of links. Up to 100 Mbits/s on
each link or 19 Mbytes/s on a single link. Packet rate processing up to
200 Mpackets/s. Data is transmitted in packets with headers, which are
used for wormhole routing via interval labelling and for Universal
Routing to eliminate hotspots.
Includes errata from previous datasheets and changes for Rev B. April
1995. 64 pages. 18 Mbytes (18280529 bytes) uncompressed. Document
number 42 1 47 0 05.
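Interval labelling, mentioned in the C104 description above, is a general routing scheme: each output link of a switch owns a contiguous interval of destination labels, and a packet is forwarded on the link whose interval contains the label carried in its header. The C sketch below illustrates only that route-selection rule, with made-up intervals; it is not a model of the C104's actual registers or link protocol.

    #include <stdio.h>

    /* Interval-labelling route selection: output link i carries every
     * packet whose destination label d satisfies low[i] <= d < high[i].
     * A switch compares the label in each packet header against this
     * table to pick the outgoing link. */
    #define NLINKS 4

    static const int low[NLINKS]  = {  0, 16, 32, 48 };  /* made-up intervals */
    static const int high[NLINKS] = { 16, 32, 48, 64 };

    static int select_link(int dest_label)
    {
        for (int i = 0; i < NLINKS; i++)
            if (low[i] <= dest_label && dest_label < high[i])
                return i;
        return -1;                                        /* label out of range */
    }

    int main(void)
    {
        int labels[] = { 3, 20, 47, 63 };
        for (int k = 0; k < 4; k++)
            printf("packet for node %2d -> output link %d\n",
                   labels[k], select_link(labels[k]));
        return 0;
    }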
- /parallel/standards/mpi/anl/patch/1.0.10/848
- Patches for MPI Chameleon version 1.0.10
10th August 1995
- /parallel/standards/mpi/anl/patch/1.0.10/818
- /parallel/standards/mpi/anl/patch/1.0.10/835
- Patches for MPI Chameleon version 1.0.10
- /parallel/standards/mpi/anl/misc/newadi.ps.Z
- Updated: "MPICH Working Note: Creating a new MPICH device
using the Channel interface" by William Gropp and Ewing Lusk.
7th August 1995
- /parallel/journals/Wiley/trcom/latex-styles/trcom03.sty
- Updated LaTeX style file for papers for the Transputer
Communications journal published by Wiley.
- /parallel/architecture/communications/io/pario/papers/Kotz/cormen:integrate-tr.ps.Z
- Updated: "Integrating Theory and Practice in Parallel File Systems"
by Thomas H. Cormen and David Kotz
- /parallel/architecture/communications/io/pario/papers/Kotz/cormen:jintegrate.ps.Z
- Updated: "Integrating Theory and Practice in Parallel File Systems"
by Thomas H. Cormen and David Kotz
- /parallel/architecture/communications/io/pario/pario-beta.bib
- Parallel I/O BibTeX bibliography file 7th Edition (BETA RELEASE).
Author: David Kotz <dfk@cs.dartmouth.edu>.
- /parallel/transputer/software/debuggers/ispy/
- Picked up latest versions of the ISPY transputer network
mapper and mtest transputer memory tester. See also
http://www.rmii.com/~andyr/ispy.html. Written by Andy
Rabagliati <andyr@wizzy.com>
- /parallel/transputer/software/debuggers/ispy/ispytest.txt
- ispy and mtest documentation.
- /parallel/transputer/software/debuggers/ispy/ispy_src.zip
- ispy and mtest source including occam source as tables
- /parallel/transputer/software/debuggers/ispy/ispy_occ.zip
- ispy occam source
- /parallel/transputer/software/debuggers/ispy/ispy_dos.zip
- ispy and mtest for DOS
- /parallel/transputer/software/debuggers/ispy/ispy_lx.zip
- ispy and mtest for Linux using the driver supplied by Christoph
Niemann <niemann@swt.ruhr-uni-bochum.de> at
ftp://ftp.swt.ruhr-uni-bochum.de/pub/linux/transputer/transputer-08.tar.gz.
- /parallel/transputer/software/debuggers/ispy/ispy_os2.zip
- ispy and mtest for OS/2
- /parallel/events/hpf-tutorial-soton.ps
- /parallel/events/hpf-tutorial-soton.txt
- High Performance Fortran tutorial and workshop
Call for participation and registration form for events being held
from 10th-11th October 1995 at the University of Southampton and Chilworth
Manor, Southampton, UK. The tutorial and workshop are supported by a
grant from the JISC New Technology Initiative.
There will be a tutorial on HPF and relevant features of Fortran 90
(e.g. array syntax) on the afternoon of Tuesday 10 October and a
one-day workshop on HPF will be held on Wednesday 11 October at
Chilworth Manor.
These events are aimed particularly at those involved in Fortran
program development in any scientific or engineering field. They are
intended to be educational, and to provide some knowledge of the uses,
limitations, availability, current research and future plans of HPF.
Delegates can choose to attend either or both events.
Both events will be *free* to staff and students from UK higher
education establishments, including lunches and refreshments. A fee is
payable by other delegates, as detailed in the reservation form at the
end of this message.
See also
http://www.hpcc.ecs.soton.ac.uk/~dbc/public_html/hpf-workshop/
Author: Bryan Carpenter <dbc@ecs.soton.ac.uk>.
- /parallel/environments/pvm3/distribution/xpvm/
- Version 1.1.1 of XPVM graphical console and monitor for PVM
- /parallel/environments/pvm3/distribution/xpvm/xpvm.src.1.1.1.tar.Z.uu.Z
- XPVM v1.1.1 source. Requires PVM 3.3.0, TCL 7.3 and TK 3.6.1 or later
versions.
- /parallel/environments/pvm3/distribution/xpvm/xpvm.alpha.1.1.1.tar.Z.uu.Z
- XPVM DEC Alpha binaries
- /parallel/environments/pvm3/distribution/xpvm/xpvm.rs6k.1.1.1.tar.Z.uu.Z
- XPVM IBM RS6000 binaries
- /parallel/environments/pvm3/distribution/xpvm/xpvm.sun4.1.1.1.tar.Z.uu.Z
- XPVM Sun SPARC binaries
- /parallel/environments/pvm3/distribution/xpvm/xpvm.sgi5.1.1.1.tar.Z.uu.Z
- XPVM SGI IRIX 5.2 binaries
- /parallel/jobs/bothell-usa-dsp-architect
- Advanced Technology Laboratories (ATL), DSP Systems based in Bothell,
Washington State, USA are looking for a DSP system architect who will
lead top-level design and implementation for a state-of-the-art DSP
platform. MSEE/Ph.D. required with 10 years' experience. It also says
"Principles only", whatever that means; can someone decode this
USA-speak?
Author: Stephen G. Dame <dames@coho.halcyon.com>.
- /parallel/jobs/texas-usa-visiting-professor
- The University of Texas of the Permian Basin, USA invites
applications for an appointment as Visiting Assistant Professor of
Computer Science. The position begins September 1, 1995. Fluent
English speakers only.
Author: Katarzyna M Paprzycka <kmpst6+@pitt.edu>.
- /parallel/standards/mpi/anl/
- New version of MPI Chameleon
- /parallel/standards/mpi/anl/mpich-1.0.10.tar.Z
- MPI Chameleon implementation version 1.0.10 (1st August 1995).
- /parallel/standards/mpi/anl/patch1.0.9-1.0.10
- /parallel/standards/mpi/anl/patch1.0.9-1.0.10.Z
- Patch from MPI Chameleon 1.0.9 to 1.0.10
- /parallel/standards/mpi/anl/userguide.ps.Z
- Users' Guide to mpich, a Portable Implementation of MPI
by Patrick Bridges, Nathan Doss, William Gropp, Edward Karrels, Ewing
Lusk and Anthony Skjellum. July 31, 1995.
ABSTRACT:
MPI (Message-Passing Interface) is a standard specification for
message-passing libraries. mpich is a portable implementation of the
full MPI specification for a wide variety of parallel computing
environments. This paper describes how to build and run MPI programs
using the MPICH implementation of MPI.
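As a flavour of what the guide covers, here is a minimal MPI program in C using only standard MPI-1 calls. With a typical mpich installation it would be compiled with the supplied mpicc wrapper and started with mpirun (for example "mpirun -np 4 hello"), though the exact commands for this release are the ones documented in the guide itself.

    #include <stdio.h>
    #include <mpi.h>

    /* Smallest useful MPI program: each process reports its rank and the
     * total number of processes in MPI_COMM_WORLD. */
    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }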
- /parallel/standards/mpi/anl/install.ps.Z
- Installation Guide to mpich, a Portable Implementation of MPI
by Patrick Bridges, Nathan Doss, William Gropp, Edward Karrels, Ewing
Lusk and Anthony Skjellum. August 1st, 1995.
ABSTRACT:
MPI (Message-Passing Interface) is a standard specification for
message-passing libraries. mpich is a portable implementation of the
full MPI specification for a wide variety of parallel computing
environments, including workstation clusters and massively parallel
processors (MPPs). mpich contains, along with the MPI library itself,
a programming environment for working with MPI programs. The
programming environment includes a portable startup mechanism, several
profiling libraries for studying the performance of MPI programs, and
an X interface to all of the tools. This guide explains how to
compile, test, and install mpich and its related tools.
- /parallel/standards/mpi/anl/manwww.tar.Z
- HTML versions of the manual pages for MPI and MPE functions.
- /parallel/standards/mpi/anl/nupshot.tar.Z
- Nupshot: A performance visualization tool that displays logfiles in
the 'alog' format or the PICL v.1 format. Requires TCL 7.3 and TK 3.6
to build.
Author: Ed Karrels <karrels@mcs.anl.gov>.
- /parallel/standards/mpi/anl/mpi-test/mpichibm.tar.Z
- A version of ibmtsuite.tar.Z for MPICH; it also contains some bug
fixes and changes to conform to clarifications of the MPIF on MPI-1.
- /parallel/standards/mpi/anl/misc/userguide.tar.Z
- Users' Guide to mpich sources in LaTeX, and an HTML version of them.
- /parallel/standards/mpi/anl/misc/convex.tar.Z
- Single-file HTML versions of the MPI report and MPICH user's guide.
- /parallel/standards/mpi/anl/mpi2/jul26.ps.Z
- /parallel/standards/mpi/anl/mpi2/jul26.dvi
- Minutes of the MPI meeting held from 26th-28th July 1995. An unedited set of
minutes taken during this MPI meeting. This contains both a summary of
some of the discussions and official, binding votes of the MPI Forum.
Author: William Gropp <gropp@mcs.anl.gov>.
- /parallel/environments/splash/
- Update to 2nd release of Stanford Parallel Applications for
Shared-Memory (SPLASH-2) suite of multiprocessor
applications.
- /parallel/environments/splash/splash2_isca95.ps.gz
- The SPLASH-2 Programs: Characterization and Methodological
Considerations
by Steven Cameron Woo, Moriyoshi Ohara, Evan Torrie, Jaswinder Pal
Singh and Anoop Gupta, Computer Systems Laboratory, Stanford
University and Department of Computer Science, Princeton University.
Appears in the Proceedings of the 22nd Annual International Symposium
on Computer Architecture, pages 24-36, June 1995.
ABSTRACT:
The SPLASH-2 suite of parallel applications has recently been
released to facilitate the study of centralized and distributed
shared-address-space multiprocessors. In this context, this paper has
two goals. One is to quantitatively characterize the SPLASH-2 programs
in terms of fundamental properties and architectural interactions that
are important to understand them well. The properties we study include
the computational load balance, communication to computation ratio
and traffic needs, important working set sizes, and issues related to
spatial locality, as well as how these properties scale with problem
size and the number of processors. The other, related goal is
methodological: to assist people who will use the programs in
architectural evaluations to prune the space of application and
machine parameters in an informed and meaningful way. For example, by
characterizing the working sets of the applications, we describe which
operating points in terms of cache size and problem size are
representative of realistic situations, which are not, and which are
redundant. Using SPLASH-2 as an example, we hope to convey the
importance of understanding the interplay of problem size, number of
processors, and working sets in designing experiments and interpreting
their results.
- /parallel/environments/splash/codes/shmem_files/sgi/shmem.c
- Source to SPLASH-2 shared memory implementation for SGI machines.
- /parallel/environments/lam/distribution/mpi-poll.txt
- MPI Poll '95 from Ohio Supercomputer Center to find out how
programmers are using MPI, what extensions are needed and how to
prioritize future work. Please fill it in and email it back to
mpi-poll@tbag.osc.edu. Also available at
http://www.osc.edu/Lam/mpi/mpi_poll.html
- /parallel/libraries/numerical/omega-calculator/omega-calc.sun3-sunos4.tar.Z
- Omega Calculator executable and examples, for Sun 3 Sparcstations
running SunOS 4.
- /parallel/environments/pvm3/tape-pvm/SampleTraces/
- Sample traces with Tape/PVM
- /parallel/environments/pvm3/tape-pvm/SampleTraces/ReadMe
- Overview of traces of PVM applications collected with TAPE/PVM.
- /parallel/environments/pvm3/tape-pvm/SampleTraces/fft2d.tgz
- Trace on IBM-SP2, four processors with TCP-IO/Switch and PVM 3.3.7.
- /parallel/transputer/software/OSes/minix/aachen/
- Lots of changes to Transputer MINIX from Aachen. New
versions of LCC binaries and documentation; library sources;
includes etc.
- /parallel/languages/fortran/adaptor/frontend.tar.Z
- Adaptor frontend source(?) (requires PUMA and possibly other GMD
compiler tools)
- /parallel/occam/compilers/ocpp/ocpp.tar.Z
- OCPP is an occam pre-processor that is somewhat more friendly than
the rather obscure non-supported INMOS tool PREOCC. The main
advantages are:
1) It comments out lines rather than removing them so toolset error
messages still refer to the relevant source line.
2) For the same reason it is able to reverse its own effect and
reconstruct its input.
3) It offers more optional methods for defining symbols and extra
directives.
4) It can modify files in-place, and can optionally annotate
conditional directives with useful debugging data.
5) It possesses a (rather lightweight) capability to import constants
from C header files directly into occam source code.
Author: Mark Ian Barlow <Mark@nlcc.demon.co.uk>.
1st August 1995
- /parallel/events/ilps95-workshop-par-logic-prog
- Post-ILPS'95 Conference Workshop on Parallel Logic Programming
Systems
Call for papers for workshop being held after ILPS'95 on 8th December
1995 at Portland, Oregon, USA. Sponsored by the Association for Logic
Programming.
Topics: Parallel Execution Models for Logic Programming; Programming
Languages for Parallel Logic Programming Systems; Parallel
Implementations of Logic Programming Systems; Scheduling for Parallel
Logic Programming Systems; Compilation Techniques for Parallel Logic
Programming Systems; Distributed Logic Programming Systems; Parallel
Logic Programming in the Real World and others.
Deadlines: Abstract / Full Papers: 2nd October 1995; Notification:
27th October 1995; Final papers: 13th November 1995.
See also http://www.up.pt/ilps_ws/
Author: Vitor Santos Costa <vsc@oat.ncc.up.pt>.
- /parallel/events/ica3pp96
- IEEE Second International Conference on Algorithms and Architectures
for Parallel Processing
Call for papers for conference being held from 11th-13th June 1996 at
Singapore. Sponsored by IEEE.
Topics: parallel architectures and systems; parallel algorithms;
parallel programming languages; parallel programming environments and
debugging; distributed operating systems; resource management;
scheduling and workload management; performance evaluation; parallel
I/O systems and interconnection networks; theoretical frameworks for
designing parallel systems; applications and tools and others.
Deadlines: Full Papers: 30th November 1995; Tutorials: 15th December
1995; Notification: 15th March 1996; Camera-ready papers: 15th April
1996.
See also http://www.iscs.nus.sg/ica3pp/.
Author: Kiong Beng Kee <isckbk@leonis.nus.sg>.
Copyright © 1995 Dave Beckett, University of Kent at Canterbury, UK.