- /parallel/jobs/student-ptools-llnl-usa
- Student co-op position at LLNL, USA
by John M May <johnmay@llnl.gov>
LLNL has a 6-month student co-op position to integrate Ptools
projects into their parallel computing environment. Requirements: US
citizens only; BS/MS/PhD in CS or related; C, parallel, unix; Ptools
experience, Tcl/Tk, HTML desirable. See also
http://www.llnl.gov/ptools/
- /parallel/events/amw96
- "Third Workshop on Abstract Machine Models for Parallel and
Distributed Computing" (AMW96)
by D Goodeve <amw96@dcre.leeds.ac.uk>
Call for participation for workshop being held from 15-17th April
1996 at Devonshire Hall, Leeds, UK. Organised by Parallel Processing
Specialist Group of the British Computer Society and the Leeds
University School of Computer Studies.
Deadlines: Position statements: 5th April 1996; Registration: 29th
April 1996.
See also http://agora.leeds.ac.uk/amw96/workshop.html
- /parallel/events/cray-euro-mpp2
- 2nd European Cray MPP Workshop
by D Henty <t3d@aquamarine.epcc.ed.ac.uk>
Call for talks / posters for workshop being held from 25-26th July
1996 at EPCC, Edinburgh, Scotland. Sponsored by UK High Performance
Computing Initiative and Cray UK.
Topics: Applications in Science and Engineering; Cray MPP Systems
Management; Programming Models and Performance Optimisation.
Deadlines: Speakers: 17th May 1996; Attendees: 31st May 1996.
See also http://www.epcc.ed.ac.uk/t3d/workshop/
- /parallel/vendors/cray/cray-euro-workshop1
- "Details of how to get talks given at 1st European CRAY T3D workshop
at Lausanne, Switzerland.
See also T3D workshop home page at
http://patpwww.epfl.ch/T3D_workshop/"
by Zdenek Sekera <zdenek.sekera@cray.epfl.ch>
- /parallel/internet/usenet/comp.parallel/FAQ/
- comp.parallel FAQ parts added - currently have parts
4, 6, 8, 10, 18, 20, 22, 28 of 28. Probably just missing part
2/28.
- /parallel/libraries/numerical/parallel-blas3-mpi
- Parallel Level 3 BLAS MPI code
by Robert van de Geijn <rvdg@cs.utexas.edu>
Details of Technical Report and software implementing a Parallel
Level 3 BLAS using MPI.
See http://www.cs.utexas.edu/users/rvdg/reports.html for report
and http://www.cs.utexas.edu/users/rvdg/sw/sB_BLAS/ for software
(GNU license)
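As a rough illustration of the kind of computation such a library
parallelises (this is not the sB_BLAS interface or algorithm), the
following C sketch distributes C = A * B over MPI processes by block
rows, with B replicated; the matrix size and data distribution are
assumptions made for the example.

    /* Hypothetical sketch of block-row parallel C = A * B over MPI.
     * Each rank owns N/nprocs rows of A and C; B is replicated.
     * Illustrates the general idea only; it is NOT the sB_BLAS
     * interface or its algorithm. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 256                       /* assumes N divisible by nprocs */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int rows = N / nprocs;          /* rows of A and C owned locally */
        double *A = malloc(rows * N * sizeof(double));
        double *B = malloc(N * N * sizeof(double));   /* replicated */
        double *C = calloc(rows * N, sizeof(double));

        for (int i = 0; i < rows * N; i++) A[i] = 1.0;   /* toy data */
        if (rank == 0)
            for (int i = 0; i < N * N; i++) B[i] = 1.0;
        MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* Local GEMM on the owned block: C(i,:) += A(i,:) * B */
        for (int i = 0; i < rows; i++)
            for (int k = 0; k < N; k++)
                for (int j = 0; j < N; j++)
                    C[i*N + j] += A[i*N + k] * B[k*N + j];

        if (rank == 0)
            printf("C[0][0] = %g (expect %d)\n", C[0], N);

        free(A); free(B); free(C);
        MPI_Finalize();
        return 0;
    }

Real parallel BLAS implementations use far more sophisticated 2-D
distributions and communication schedules than this; see the report
above for the actual techniques.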
- /parallel/libraries/communication/fm/
- The Illinois Fast Messages interface contains fast messaging
primitives which provide low-latency, high-bandwidth communication
for messages as short as 4 words.
FM has been implemented on several platforms: the Cray T3D and, now,
workstation clusters interconnected by a high-speed Myrinet network.
See http://www-csag.cs.uiuc.edu/projects/comm/fm.html for FM
details, how to get software (requires registration)
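FM's programming model is in the active-messages style: each send
names a handler that runs at the receiver, and the receiver polls to
drain pending messages. The C sketch below simulates that flow
in-process with a loopback queue so it runs standalone; fm_send and
fm_extract are placeholder names invented for the illustration, not
the actual FM 1.1 API (see the URL above for that).

    /* Active-messages-style sketch of the Fast Messages model,
     * simulated in-process with a loopback queue so it compiles and
     * runs standalone. fm_send()/fm_extract() are PLACEHOLDER names
     * for illustration only; consult the FM documentation for the
     * real API. */
    #include <stdio.h>
    #include <string.h>

    typedef void (*fm_handler)(void *buf, int len);

    struct msg { fm_handler h; char buf[64]; int len; };
    static struct msg queue[16];
    static int qhead, qtail;

    /* "Send": tag a short message with the handler to run on arrival.
     * A real FM send would write it into the network interface. */
    static void fm_send(int dest, fm_handler h, void *buf, int len)
    {
        (void)dest;                    /* loopback: ignore destination */
        queue[qtail].h = h;
        memcpy(queue[qtail].buf, buf, len);
        queue[qtail].len = len;
        qtail = (qtail + 1) % 16;
    }

    /* "Extract": poll the receive queue and run each message's
     * handler. Polling rather than interrupts is one way FM-style
     * layers keep per-message latency low. */
    static void fm_extract(void)
    {
        while (qhead != qtail) {
            struct msg *m = &queue[qhead];
            m->h(m->buf, m->len);
            qhead = (qhead + 1) % 16;
        }
    }

    static void on_update(void *buf, int len)
    {
        printf("handler ran on %d bytes; first word = %d\n",
               len, *(int *)buf);
    }

    int main(void)
    {
        int payload[4] = { 1, 2, 3, 4 };   /* a 4-word message */
        fm_send(0, on_update, payload, sizeof payload);
        fm_extract();                      /* process anything pending */
        return 0;
    }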
- /parallel/libraries/communication/fm/Announcement
- Announcement of Illinois Fast Messages library v1.1
by Scott Pakin <pakin@white-elephant.cs.uiuc.edu>
- /parallel/libraries/communication/karma/
- Karma is a toolkit for interprocess communication, authentication,
encryption, graphics display, user interfaces and manipulation of the
Karma network data structure. It contains KarmaLib (the structured
libraries and API) and a large number of modules (applications) that
perform many standard tasks.
See also http://wwwatnf.atnf.csiro.au/karma/
Please note that the Karma home site
ftp://ftp.atnf.csiro.au/pub/software/karma/ contains binary
releases of Karma and some other files. They are not included here
since they are rather large.
- /parallel/libraries/communication/karma/Announcement
- Announcement of Karma v1.3
by Richard Gooch <rgooch@rp.CSIRO.AU>
- /parallel/libraries/communication/karma/Release-1.3
- Karma V1.3 Release Notes
- /parallel/libraries/communication/karma/karma.src-v1.3.tar.gz
- Karma V1.3 sources
- /parallel/libraries/communication/karma/mirror-sites
- Mirror sites for Karma
- /parallel/simulation/architectures/paint/
- An instruction set simulator based on Mint. Paint interprets the
PA-RISC instruction set and has been extended to support the
Avalanche Scalable Computing Project at the University of Utah.
See also the publications at
http://www.cs.utah.edu/projects/avalanche/avalanche-publications.html
including the Paint paper at
http://www.cs.utah.edu/projects/avalanche/paint.ps
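As background, the core of an instruction set interpreter of this
kind is a fetch-decode-execute loop. The C sketch below shows one for
a toy ISA; it is not Paint's source and not PA-RISC, and both opcodes
are invented.

    /* Generic fetch-decode-execute loop for a toy ISA; NOT Paint's
     * code and not PA-RISC. The two opcodes are invented for clarity. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_ADDI = 1, OP_HALT = 2 };   /* invented opcodes */

    int main(void)
    {
        /* Encoded as (opcode, register, immediate) triples. */
        uint32_t program[] = { OP_ADDI, 0, 5,
                               OP_ADDI, 0, 7,
                               OP_HALT, 0, 0 };
        uint32_t regs[8] = { 0 };
        size_t pc = 0;

        for (;;) {
            uint32_t op  = program[pc];          /* fetch */
            uint32_t reg = program[pc + 1];
            uint32_t imm = program[pc + 2];
            pc += 3;

            switch (op) {                        /* decode + execute */
            case OP_ADDI: regs[reg] += imm; break;
            case OP_HALT: printf("r0 = %u\n", regs[0]); return 0;
            default:      return 1;              /* illegal instruction */
            }
        }
    }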
- /parallel/simulation/architectures/paint/paint.tar.Z
- PAINT distribution
by Avalanche Project <avalanche@jensen.cs.utah.edu>
- /parallel/transputer/software/compilers/gcc/pereslavl/
- Update to gcc 2.7.2 for Transputers (unofficial)
- /parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/changes5
- Changes in gcc-2.7.2-t800.5
- /parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/gcc-2.7.2-t800.5.dif.gz
- gcc-2.7.2 for t800 (source diff) V5
- /parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/patch5.gz
- Patch from V4 to V5
- /parallel/standards/mpi/mpimap/MPIMap.tar.Z
- MPIMap 1.1.1 - A tool for visualizing MPI datatypes
by John May <johnmay@llnl.gov>
Must run on the target parallel machine for which the MPI code is
being developed, since it calls MPI to determine datatype layouts and
sizes.
Designed to work only with the MPICH implementation of MPI; it has
been tested with versions V1.0.10 through V1.0.12. It also requires
Tcl and Tk, has been tested with Tcl 7.3/Tk 3.6 and Tcl 7.4/Tk 4.0,
and works best with a colour display.
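The reason the tool must run on the target machine is that datatype
sizes and extents are implementation- and platform-dependent, so they
can only be discovered by asking the MPI library itself. Below is a
minimal sketch of that kind of query, using standard MPI-1 calls
(this is not MPIMap's own source).

    /* Minimal sketch of querying MPI for a derived datatype's layout,
     * the kind of information MPIMap visualises. Standard MPI-1
     * calls; NOT MPIMap's own source. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* A vector type: 3 blocks of 2 doubles, stride 4 doubles. */
        MPI_Datatype vec;
        MPI_Type_vector(3, 2, 4, MPI_DOUBLE, &vec);
        MPI_Type_commit(&vec);

        int size;           /* bytes of actual data */
        MPI_Aint extent;    /* span from first byte to last, with holes */
        MPI_Type_size(vec, &size);
        MPI_Type_extent(vec, &extent);   /* the MPI-1 call of this era */

        printf("size = %d bytes, extent = %ld bytes\n",
               size, (long)extent);

        MPI_Type_free(&vec);
        MPI_Finalize();
        return 0;
    }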
- /parallel/environments/lam/distribution/lam60-patch.tar.gz
- Patches 01-03 for LAM 6.0
- /parallel/environments/pvm3/emory-vss/tpvm.ps.Z
- Multiparadigm Distributed Computing with TPVM
by Adam Ferrari <ferrari@virginia.edu>, Department of Computer
Science, University of Virginia, Charlottesville, VA 22903, USA and
V. S. Sunderam <vss@mathcs.emory.edu>, Department of Mathematics and
Computer Science, Emory University, Atlanta, GA, USA.
ABSTRACT:
Distributed concurrent computing based on lightweight processes can
potentially address performance and functionality limits in
heterogeneous systems. The TPVM framework, based on the notion of
exportable services, is an extension to the PVM message passing
system, but uses threads as units of computing, scheduling, and
parallelism. TPVM facilitates and supports three different distributed
concurrent programming paradigms: (a) the traditional, task based,
explicit message passing model; (b) a data-driven instantiation model
that enables straightforward specification of computation based on
data dependencies; and (c) a partial shared-address space model via
remote memory access, with naming and typing of distributed data
areas. The latter models offer significantly different computing
paradigms for network-based computing, while maintaining a close
resemblance to, and building upon, the conventional PVM infrastructure
in the interest of compatibility and ease of transition. The TPVM
system comprises three basic modules: a library interface that
provides access to thread-based distributed concurrent computing
facilities, a portable thread interface module which abstracts the
required threads-related services, and a thread server module which
performs scheduling and system data management. System implementation
as well as applications experiences have been very encouraging,
indicating the viability of the proposed models, the feasibility of
portable and efficient threads systems for distributed computing, and
the performance improvements that result from multithreaded concurrent
computing.
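For contrast with the thread-based models above, paradigm (a) is
conventional task-based PVM message passing. The C sketch below shows
a minimal parent/child exchange using standard PVM 3 calls; the
spawned executable name is an assumption, and TPVM's own thread-export
interface is not shown.

    /* Minimal sketch of paradigm (a): conventional task-based PVM
     * message passing. Standard PVM 3 calls; the executable name
     * "tpvm_demo" is hypothetical. TPVM's thread API is not shown. */
    #include <pvm3.h>
    #include <stdio.h>

    int main(void)
    {
        int mytid = pvm_mytid();       /* enrol this task in PVM */
        int parent = pvm_parent();

        if (parent == PvmNoParent) {   /* parent: spawn, send, receive */
            int child, n = 42, reply;
            pvm_spawn("tpvm_demo", NULL, PvmTaskDefault, "", 1, &child);

            pvm_initsend(PvmDataDefault);
            pvm_pkint(&n, 1, 1);
            pvm_send(child, 1);        /* message tag 1 */

            pvm_recv(child, 2);        /* wait for tag 2 */
            pvm_upkint(&reply, 1, 1);
            printf("reply = %d\n", reply);
        } else {                       /* child: receive, double, reply */
            int n;
            pvm_recv(parent, 1);
            pvm_upkint(&n, 1, 1);
            n *= 2;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&n, 1, 1);
            pvm_send(parent, 2);
        }
        pvm_exit();
        return 0;
    }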
- /parallel/events/sce96
- Scientific Computing in Europe (SCE 96)
by Helen Keogan CA <Helen.Keogan@compapp.dcu.ie>
Call for papers for conference being held from 2nd-4th September 1996
at Dublin, Ireland.
Topics: quantum chemical computations/molecular dynamics;
bioinformatics: computers in molecular biology; modelling traffic
flow; forest fires; simulating foams and avalanche effects under
strain; building virtual instruments; immunological cellular automata
and others.
Deadlines: Abstracts: 1st May 1996; Notification: 1st June 1996.
See also http://www.ctc.dcu.ie/conf/sce96.html
- /parallel/events/hpc-comp-modelling
- User Experience Day: HPC for Computation and Modelling
by James Allwright <jra@ecs.soton.ac.uk>
Call for attendance for 1-day course being held on 19th March 1996 at
University of Southampton, UK.
The day will be split into short talks from experienced users both
from the University and from other institutions. The programme will
include speakers with experience of HPC in oceanography, aeronautics,
chemistry, medicine, social statistics and engineering.
See also http://www.hpcc.ecs.soton.ac.uk/events.html
- /parallel/events/prog-mpi
- Programming with MPI
by James Allwright <jra@ecs.soton.ac.uk>
Call for attendance for 1-day course being held on 21st March 1996 at
University of Southampton, UK.
A one-day course in four parts: introduction to HPC, overview of MPI,
comparison of MPI and PVM, and practical session.
See also http://www.hpcc.ecs.soton.ac.uk/events.html
- /parallel/events/icdcs96
- 16th International Conference on Distributed Computing Systems (ICDCS '96)
by Dr. Mounir Hamdi <hamdi@cs.ust.hk>
Call for participation, advance program and registration form for
conference being held from 27-30th May 1996 at Hong Kong Convention
and Exhibition Centre, Hong Kong. Sponsored by the IEEE Computer
Society Technical Committee on Distributed Processing in cooperation
with the Faculty of Science and Technology, University of Macau.
See also http://www.cs.ust.hk/ICDCS/ICDCS-Adv.html
- /parallel/events/pc96
- Updated: "8th Joint EPS - APS International Conference on Physics Computing" (Physics Computing'96)
by Zofia Mosurska <pc96@cyf-kr.edu.pl>
Call for papers and attendance, list of invited lecturers and
registration form for conference being held from 17-21st September
1996 at Krakow, Poland. Sponsored by European and American Physical
Societies.
Topics: computer simulation in statistical physics; simulation of
specific materials; surface phenomena; percolation; critical
phenomena; computational fluid dynamics; classical and quantum
molecular dynamics; chaos, dynamical systems; self-organization and
growth; neural networks and their applications; complex optimization.
Also covered are contemporary trends in hardware and software development:
recent developments in computer architectures; modern programming
techniques (parallel programming, object oriented approach); symbolic
computations; graphics, visualization and animation together with
industrial applications and teaching of computational physics and
others.
Deadlines: Camera-ready Papers: 30th April 1996; Notification: 31st
May 1996.
See also http://www.cyf-kr.edu.pl/pc96/ and
ftp://ftp.cyf-kr.edu.pl/pc96/
- /parallel/events/pdpta96-bus-arch
- Session On Computing on Bus-Based Architectures at PDPTA'96
Call for papers for session at International Conference on Parallel
and Distributed Processing Techniques and Applications (PDPTA'96)
being held from 9th-11th August 1996 at Sunnyvale, California, USA.
Topics: Arrays with multiple broadcasting; Arrays with a
reconfigurable bus system; Arrays with an optical bus system; Arrays
with a reconfigurable pipelined bus system; Algorithm Design
(arithmetic/geometric/graph/numerical/randomised); Embedding of Fixed
Topologies; Image Processing; Time Complexity; Scalability analysis;
Emulation among different models and others.
Deadlines: Papers: 2nd April 1996; Notification: 2nd May 1996;
Camera-ready papers: 21st June 1996.
- /parallel/events/oopar
- Updated: "Object-Oriented Approaches to Parallel Programming"
by Mike Quinn <M.J.Quinn@ecs.soton.ac.uk>
Details of workshop being held on 3rd March 1996 at Chilworth Manor
Conference Centre, Chilworth (near Southampton), England.
Sponsored by EPSRC and the University of Southampton.
A one-day workshop with invited speakers.
See also http://www.hpcc.ecs.soton.ac.uk/~mjq/abstracts.html
- /parallel/events/europar96
- EURO-PAR'96 (merging of CONPAR-VAPP and PARLE)
by Yves Robert <yrobert@cri.ens-lyon.fr>
Call for papers, talks, tutorials and submission form for conference
being held at ENS Lyon, France.
Workshops: Programming environment and tools; Routing and
communication in interconnection networks; Automatic parallelization
and high performance compilers; Distributed systems and algorithms;
Parallel languages, programming and semantics; Parallel non numerical
algorithms; Parallel numerical algorithms; Parallel DSP and image
processing; VLSI design automation; Computer arithmetic; High
performance computing and applications; Theory and models of parallel
computing; Parallel computer architecture; Networks and ATM; Optics
and other new technologies for parallel computation; Neural networks;
Scheduling and load balancing; Critical systems; Performance
evaluation; Instruction level parallelism; High-level and meta-level
control in parallel symbolic programs; Parallel and distributed
databases.
Deadlines: Electronic paper submissions: 18th February 1996;
Notification of acceptance: 10th May 1996; Final papers: 10th June
1996.
See also http://www.ens-lyon.fr/LIP/europar96/
- /parallel/events/europvm
- 3rd European PVM Users' Group Meeting 1996 (EuroPVM'96)
by Roland Wismüller <wismuell@informatik.tu-muenchen.de>,
http://wwwbode.informatik.tu-muenchen.de/~wismuell
Details of event being held at Munich, Germany. Organised by
Technische Universität München.
Topics: Experiences with PVM; Parallel and Distributed Applications;
Programming Tools; Future PVM Developments and PVM Tutorials.
Deadlines: Abstracts/papers: 15th April 1996; Camera ready copies of
accepted papers: 8th July 1996; Proposals for BOF sessions: 8th July
1996 and Proposals for vendor presentations: 8th July 1996.
See also http://wwwbode.informatik.tu-muenchen.de/~europvm/
- /parallel/standards/mpi/anl/workingnote/adi2impl.ps.Z
- "MPICH Working Note: The implementation of the second generation
MPICH ADI"
by William Gropp and Ewing Lusk.
ABSTRACT:
The MPICH implementation of the MPI standard is built on a lower
level communications layer called the abstract device interface. The
purpose of this interface is to make it easy to port the MPICH
implementation by separating out into its own module the code that
handles just the communication between two processes. This note
describes an implementation of this interface that is flexible and
efficient. In addition, this implementation supports multiple devices
(e.g., TCP/IP and shared memory, or TCP/IP and a proprietary
interconnect) in a single
MPI application.
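The core idea of such a device interface can be conveyed with a
per-device table of function pointers: the portable upper layer calls
through the table, and each transport (TCP/IP, shared memory, a
proprietary interconnect) supplies its own entries. The C sketch
below is illustrative only; its names are invented and do not match
the actual MPICH ADI entry points.

    /* Illustrative sketch of an abstract device interface: the
     * portable layer calls through per-device function pointers.
     * All names are invented; NOT the actual MPICH ADI. */
    #include <stdio.h>
    #include <string.h>

    typedef struct device {
        const char *name;
        int (*send)(int dest, const void *buf, int len);
        int (*recv)(int src, void *buf, int len);
    } device_t;

    /* One possible device: loopback via a static buffer, standing in
     * for a real transport such as TCP/IP or shared memory. */
    static char stage[256];
    static int stage_len;

    static int loop_send(int dest, const void *buf, int len)
    {
        (void)dest;
        memcpy(stage, buf, len);
        stage_len = len;
        return 0;
    }

    static int loop_recv(int src, void *buf, int len)
    {
        (void)src;
        if (len > stage_len) len = stage_len;
        memcpy(buf, stage, len);
        return len;
    }

    static device_t loopback = { "loopback", loop_send, loop_recv };

    int main(void)
    {
        /* The portable layer is written once against the device
         * table; supporting a new interconnect means supplying a
         * new table, not changing the upper layer. */
        device_t *dev = &loopback;
        char out[] = "hello", in[16];

        dev->send(1, out, sizeof out);
        int n = dev->recv(1, in, sizeof in);
        printf("via %s device: %d bytes, \"%s\"\n", dev->name, n, in);
        return 0;
    }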